Test Report: Hyper-V_Windows 20535

f30cb3cfe346a634e035681bc4eff951ae572c17:2025-03-17:38751

Failed tests (13/211)

TestErrorSpam/setup (201.16s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-647700 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-647700 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 --driver=hyperv: (3m21.1558423s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-647700] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=20535
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-647700" primary control-plane node in "nospam-647700" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-647700" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (201.16s)
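The failure above is the spam check flagging two stderr lines that match none of its allowed patterns. A simplified sketch of that kind of check (the real allowed patterns live in error_spam_test.go in the minikube repository; this is only an approximation):

```go
package main

import (
	"fmt"
	"strings"
)

// unexpectedStderr returns the stderr lines that contain none of the
// allowed substrings. Simplified sketch of the error_spam_test.go check.
func unexpectedStderr(stderr string, allowed []string) []string {
	var unexpected []string
	for _, line := range strings.Split(stderr, "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		permitted := false
		for _, a := range allowed {
			if strings.Contains(line, a) {
				permitted = true
				break
			}
		}
		if !permitted {
			unexpected = append(unexpected, line)
		}
	}
	return unexpected
}

func main() {
	stderr := "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM\n" +
		"* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
	// With no matching allowlist entries, both lines are reported,
	// which is what produces the two "unexpected stderr" failures above.
	fmt.Println(len(unexpectedStderr(stderr, nil)))
}
```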

TestFunctional/parallel/ServiceCmd/HTTPS (15.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 service --namespace=default --https --url hello-node
functional_test.go:1526: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-758100 service --namespace=default --https --url hello-node: exit status 1 (15.040792s)
functional_test.go:1528: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-758100 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.04s)

TestFunctional/parallel/ServiceCmd/Format (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 service hello-node --url --format={{.IP}}
functional_test.go:1557: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-758100 service hello-node --url --format={{.IP}}: exit status 1 (15.0147171s)
functional_test.go:1559: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-758100 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1565: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.02s)

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 service hello-node --url
functional_test.go:1576: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-758100 service hello-node --url: exit status 1 (15.0115359s)
functional_test.go:1578: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-758100 service hello-node --url": exit status 1
functional_test.go:1582: found endpoint for hello-node: 
functional_test.go:1590: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.01s)
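The assertion at functional_test.go:1590 expects the endpoint printed by `minikube service --url` to use the "http" scheme; here the endpoint was empty, so the parsed scheme was empty too. The check can be approximated like this (a sketch, not the test's actual code):

```go
package main

import (
	"fmt"
	"net/url"
)

// schemeOf parses a raw URL and returns its scheme. An empty endpoint,
// as in the failure above, parses without error but yields an empty
// scheme, which fails an `== "http"` comparison.
func schemeOf(raw string) string {
	u, err := url.Parse(raw)
	if err != nil {
		return ""
	}
	return u.Scheme
}

func main() {
	// A well-formed NodePort endpoint (address is illustrative only).
	fmt.Println(schemeOf("http://192.168.1.10:31234"))
	// The empty endpoint seen in the log produces an empty scheme.
	fmt.Println(schemeOf("") == "")
}
```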

TestMultiControlPlane/serial/PingHostFromPods (69.75s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-9977c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-9977c -- sh -c "ping -c 1 172.25.16.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-9977c -- sh -c "ping -c 1 172.25.16.1": exit status 1 (10.5045261s)

-- stdout --
	PING 172.25.16.1 (172.25.16.1): 56 data bytes
	
	--- 172.25.16.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.25.16.1) from pod (busybox-58667487b6-9977c): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-w6ngz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-w6ngz -- sh -c "ping -c 1 172.25.16.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-w6ngz -- sh -c "ping -c 1 172.25.16.1": exit status 1 (10.4664816s)

-- stdout --
	PING 172.25.16.1 (172.25.16.1): 56 data bytes
	
	--- 172.25.16.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.25.16.1) from pod (busybox-58667487b6-w6ngz): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-xlpx5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-xlpx5 -- sh -c "ping -c 1 172.25.16.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-xlpx5 -- sh -c "ping -c 1 172.25.16.1": exit status 1 (10.4879105s)

-- stdout --
	PING 172.25.16.1 (172.25.16.1): 56 data bytes
	
	--- 172.25.16.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.25.16.1) from pod (busybox-58667487b6-xlpx5): exit status 1
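Each attempt above first resolves the host IP inside the pod with the pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` (fifth line, third space-separated field), then pings it. That extraction can be sketched as follows; the sample output is illustrative busybox-style nslookup output, not captured from this run:

```go
package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics `nslookup ... | awk 'NR==5' | cut -d' ' -f3`:
// take the fifth line of nslookup output and return its third field.
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	// cut -d' ' splits on every single space, so use Split, not Fields.
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 172.25.16.1"
	fmt.Println(hostIPFromNslookup(sample))
}
```

In this run the extraction itself succeeded (the test got 172.25.16.1); it is the subsequent ping to the Hyper-V host that lost all packets.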
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-450500 -n ha-450500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-450500 -n ha-450500: (12.568947s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 logs -n 25: (9.2141802s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-758100                    | functional-758100 | minikube6\jenkins | v1.35.0 | 17 Mar 25 10:59 UTC | 17 Mar 25 10:59 UTC |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-758100 image build -t     | functional-758100 | minikube6\jenkins | v1.35.0 | 17 Mar 25 10:59 UTC | 17 Mar 25 10:59 UTC |
	|         | localhost/my-image:functional-758100 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-758100 image ls           | functional-758100 | minikube6\jenkins | v1.35.0 | 17 Mar 25 10:59 UTC | 17 Mar 25 10:59 UTC |
	| delete  | -p functional-758100                 | functional-758100 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:04 UTC | 17 Mar 25 11:05 UTC |
	| start   | -p ha-450500 --wait=true             | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:05 UTC | 17 Mar 25 11:16 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- apply -f             | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC | 17 Mar 25 11:17 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- rollout status       | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC | 17 Mar 25 11:17 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- get pods -o          | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC | 17 Mar 25 11:17 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- get pods -o          | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC | 17 Mar 25 11:17 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- exec                 | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC | 17 Mar 25 11:17 UTC |
	|         | busybox-58667487b6-9977c --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- exec                 | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC | 17 Mar 25 11:17 UTC |
	|         | busybox-58667487b6-w6ngz --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- exec                 | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC | 17 Mar 25 11:17 UTC |
	|         | busybox-58667487b6-xlpx5 --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- exec                 | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC | 17 Mar 25 11:17 UTC |
	|         | busybox-58667487b6-9977c --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- exec                 | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC | 17 Mar 25 11:17 UTC |
	|         | busybox-58667487b6-w6ngz --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- exec                 | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC | 17 Mar 25 11:17 UTC |
	|         | busybox-58667487b6-xlpx5 --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- exec                 | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC | 17 Mar 25 11:17 UTC |
	|         | busybox-58667487b6-9977c -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- exec                 | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC | 17 Mar 25 11:17 UTC |
	|         | busybox-58667487b6-w6ngz -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- exec                 | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC | 17 Mar 25 11:17 UTC |
	|         | busybox-58667487b6-xlpx5 -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- get pods -o          | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC | 17 Mar 25 11:17 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- exec                 | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC | 17 Mar 25 11:17 UTC |
	|         | busybox-58667487b6-9977c             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- exec                 | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC |                     |
	|         | busybox-58667487b6-9977c -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.16.1             |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- exec                 | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC | 17 Mar 25 11:17 UTC |
	|         | busybox-58667487b6-w6ngz             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- exec                 | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC |                     |
	|         | busybox-58667487b6-w6ngz -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.16.1             |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- exec                 | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC | 17 Mar 25 11:17 UTC |
	|         | busybox-58667487b6-xlpx5             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-450500 -- exec                 | ha-450500         | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:17 UTC |                     |
	|         | busybox-58667487b6-xlpx5 -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.16.1             |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 11:05:16
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 11:05:16.652834    8508 out.go:345] Setting OutFile to fd 1296 ...
	I0317 11:05:16.727290    8508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:05:16.727290    8508 out.go:358] Setting ErrFile to fd 1704...
	I0317 11:05:16.727290    8508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:05:16.746123    8508 out.go:352] Setting JSON to false
	I0317 11:05:16.750127    8508 start.go:129] hostinfo: {"hostname":"minikube6","uptime":3293,"bootTime":1742206223,"procs":178,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5608 Build 19045.5608","kernelVersion":"10.0.19045.5608 Build 19045.5608","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0317 11:05:16.750127    8508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0317 11:05:16.757118    8508 out.go:177] * [ha-450500] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608
	I0317 11:05:16.761124    8508 notify.go:220] Checking for updates...
	I0317 11:05:16.764126    8508 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 11:05:16.766135    8508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 11:05:16.769121    8508 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0317 11:05:16.772120    8508 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 11:05:16.775115    8508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 11:05:16.778128    8508 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 11:05:22.209480    8508 out.go:177] * Using the hyperv driver based on user configuration
	I0317 11:05:22.213793    8508 start.go:297] selected driver: hyperv
	I0317 11:05:22.213793    8508 start.go:901] validating driver "hyperv" against <nil>
	I0317 11:05:22.213793    8508 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 11:05:22.263169    8508 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 11:05:22.264743    8508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 11:05:22.264743    8508 cni.go:84] Creating CNI manager for ""
	I0317 11:05:22.264743    8508 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0317 11:05:22.264743    8508 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0317 11:05:22.264743    8508 start.go:340] cluster config:
	{Name:ha-450500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:05:22.265671    8508 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:05:22.271349    8508 out.go:177] * Starting "ha-450500" primary control-plane node in "ha-450500" cluster
	I0317 11:05:22.274188    8508 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 11:05:22.274424    8508 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0317 11:05:22.274499    8508 cache.go:56] Caching tarball of preloaded images
	I0317 11:05:22.274899    8508 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 11:05:22.275061    8508 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0317 11:05:22.275663    8508 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json ...
	I0317 11:05:22.275915    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json: {Name:mk6a4b7a1771fbbf998c27c763b172cd014033ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:05:22.277349    8508 start.go:360] acquireMachinesLock for ha-450500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 11:05:22.277545    8508 start.go:364] duration metric: took 72.2µs to acquireMachinesLock for "ha-450500"
	I0317 11:05:22.277887    8508 start.go:93] Provisioning new machine with config: &{Name:ha-450500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 11:05:22.277963    8508 start.go:125] createHost starting for "" (driver="hyperv")
	I0317 11:05:22.280394    8508 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0317 11:05:22.281178    8508 start.go:159] libmachine.API.Create for "ha-450500" (driver="hyperv")
	I0317 11:05:22.281279    8508 client.go:168] LocalClient.Create starting
	I0317 11:05:22.281306    8508 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0317 11:05:22.281876    8508 main.go:141] libmachine: Decoding PEM data...
	I0317 11:05:22.281876    8508 main.go:141] libmachine: Parsing certificate...
	I0317 11:05:22.281876    8508 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0317 11:05:22.281876    8508 main.go:141] libmachine: Decoding PEM data...
	I0317 11:05:22.281876    8508 main.go:141] libmachine: Parsing certificate...
	I0317 11:05:22.282427    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0317 11:05:24.406557    8508 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0317 11:05:24.406557    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:24.406796    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0317 11:05:26.158096    8508 main.go:141] libmachine: [stdout =====>] : False
	
	I0317 11:05:26.158096    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:26.159005    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 11:05:27.688410    8508 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 11:05:27.688902    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:27.688974    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 11:05:31.389058    8508 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 11:05:31.389058    8508 main.go:141] libmachine: [stderr =====>] : 
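The query above filters Hyper-V switches to External ones or the well-known "Default Switch" GUID, and the driver later logs `Using switch "Default Switch"`. A minimal Python sketch of that selection rule, applied to the JSON shown in the log (assuming Hyper-V's `SwitchType` enumeration of Private=0, Internal=1, External=2; the function name is illustrative, not minikube's):

```python
import json

# GUID of Hyper-V's built-in "Default Switch", as used in the logged query.
DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

def pick_switch(raw: str) -> str:
    """Prefer an External switch; otherwise fall back to the Default Switch."""
    switches = json.loads(raw)
    external = [s for s in switches if s["SwitchType"] == 2]  # 2 = External
    candidates = external or [s for s in switches if s["Id"] == DEFAULT_SWITCH_ID]
    return candidates[0]["Name"]

# The exact JSON emitted by Get-VMSwitch in the log above.
output = '[{"Id": "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444", "Name": "Default Switch", "SwitchType": 1}]'
print(pick_switch(output))  # no External switch present, so: Default Switch
```

With only the Internal "Default Switch" available (SwitchType 1), the fallback branch is taken, matching the `Using switch "Default Switch"` line later in the log.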
	I0317 11:05:31.392877    8508 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0317 11:05:31.916010    8508 main.go:141] libmachine: Creating SSH key...
	I0317 11:05:32.140452    8508 main.go:141] libmachine: Creating VM...
	I0317 11:05:32.140452    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 11:05:35.042462    8508 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 11:05:35.042520    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:35.042520    8508 main.go:141] libmachine: Using switch "Default Switch"
	I0317 11:05:35.042520    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 11:05:36.861622    8508 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 11:05:36.861622    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:36.861727    8508 main.go:141] libmachine: Creating VHD
	I0317 11:05:36.861822    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0317 11:05:40.793443    8508 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 3A17D52E-98AD-4CD0-8637-F68C66327875
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0317 11:05:40.794423    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:40.794473    8508 main.go:141] libmachine: Writing magic tar header
	I0317 11:05:40.794557    8508 main.go:141] libmachine: Writing SSH key tar header
	I0317 11:05:40.808435    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0317 11:05:44.045046    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:05:44.046099    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:44.046158    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\disk.vhd' -SizeBytes 20000MB
	I0317 11:05:46.661572    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:05:46.661572    8508 main.go:141] libmachine: [stderr =====>] : 
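The sequence above ("Writing magic tar header" / "Writing SSH key tar header", then `Convert-VHD ... -DeleteSource` and `Resize-VHD`) prepares the boot disk: a tiny fixed VHD gets a tar stream written into its raw data so the guest can find and extract the SSH key on first boot, then the disk is converted to dynamic and grown to the requested 20000MB. A hedged Python sketch of the tar-at-offset-zero idea only (the file name and key below are illustrative; minikube's exact in-image layout may differ):

```python
import io
import tarfile

def pack_key(pubkey: bytes) -> bytes:
    """Pack an SSH key as a tar stream, suitable for writing at the start
    of a raw disk image so a guest-side init can detect and extract it."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=".ssh/authorized_keys")  # illustrative path
        info.size = len(pubkey)
        tar.addfile(info, io.BytesIO(pubkey))
    return buf.getvalue()

blob = pack_key(b"ssh-rsa AAAA... demo-key")
# POSIX/ustar tar headers carry the magic string "ustar" at offset 257 of the
# first 512-byte header block; that is what a guest can probe for.
print(blob[257:262])  # b'ustar'
```

The guest only needs to check the first header block for the `ustar` magic to decide whether the disk carries a key archive rather than a filesystem.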
	I0317 11:05:46.661572    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-450500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0317 11:05:50.342966    8508 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-450500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0317 11:05:50.343968    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:50.343968    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-450500 -DynamicMemoryEnabled $false
	I0317 11:05:52.638652    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:05:52.638652    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:52.639540    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-450500 -Count 2
	I0317 11:05:54.910267    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:05:54.911285    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:54.911329    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-450500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\boot2docker.iso'
	I0317 11:05:57.584529    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:05:57.585010    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:57.585060    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-450500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\disk.vhd'
	I0317 11:06:00.282011    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:06:00.282011    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:00.282011    8508 main.go:141] libmachine: Starting VM...
	I0317 11:06:00.282011    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-450500
	I0317 11:06:03.470563    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:06:03.470765    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:03.470765    8508 main.go:141] libmachine: Waiting for host to start...
	I0317 11:06:03.470876    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:05.752314    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:05.753279    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:05.753347    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:08.305190    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:06:08.305190    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:09.305483    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:11.540380    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:11.540380    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:11.540774    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:14.102946    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:06:14.103211    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:15.104620    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:17.358582    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:17.358684    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:17.358752    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:19.889431    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:06:19.890106    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:20.890785    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:23.132030    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:23.132327    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:23.132522    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:25.659276    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:06:25.659944    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:26.661003    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:28.918666    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:28.918666    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:28.919215    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:31.548433    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:06:31.548433    8508 main.go:141] libmachine: [stderr =====>] : 
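The "Waiting for host to start..." stretch above is a poll loop: the driver repeatedly checks `(Get-VM).state` and the first NIC's first IP address, sleeping about a second between attempts, until the VM reports an address (here `172.25.16.34` after roughly five rounds). A minimal Python sketch of that loop; `get_state` and `get_ip` stand in for the logged PowerShell queries, and the timeout value is an assumption:

```python
import time

def wait_for_ip(get_state, get_ip, timeout=120.0, interval=1.0):
    """Poll VM state and IP until an address is reported or the timeout hits."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_state() == "Running":
            ip = get_ip()
            if ip:  # empty stdout means no address assigned yet
                return ip
        time.sleep(interval)
    raise TimeoutError("VM did not report an IP address in time")

# Simulated VM: no address for the first few polls, then 172.25.16.34.
answers = iter(["", "", "", "172.25.16.34"])
print(wait_for_ip(lambda: "Running", lambda: next(answers), interval=0.01))
```

The empty `[stdout =====>]` lines in the log correspond to the polls where the adapter had no address yet.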
	I0317 11:06:31.549138    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:33.689233    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:33.689822    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:33.689866    8508 machine.go:93] provisionDockerMachine start ...
	I0317 11:06:33.690002    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:35.862241    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:35.862241    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:35.862342    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:38.438530    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:06:38.438530    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:38.444781    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:06:38.459326    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.34 22 <nil> <nil>}
	I0317 11:06:38.459326    8508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 11:06:38.602091    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0317 11:06:38.602091    8508 buildroot.go:166] provisioning hostname "ha-450500"
	I0317 11:06:38.602091    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:40.724125    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:40.724125    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:40.724408    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:43.272848    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:06:43.273265    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:43.280099    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:06:43.281056    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.34 22 <nil> <nil>}
	I0317 11:06:43.281056    8508 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450500 && echo "ha-450500" | sudo tee /etc/hostname
	I0317 11:06:43.447356    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450500
	
	I0317 11:06:43.447356    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:45.558133    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:45.558671    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:45.558783    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:48.047202    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:06:48.047202    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:48.053645    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:06:48.054245    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.34 22 <nil> <nil>}
	I0317 11:06:48.054801    8508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450500' | sudo tee -a /etc/hosts; 
				fi
			fi
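The shell snippet above makes the hostname resolve locally: if no `/etc/hosts` line already ends in `ha-450500`, it rewrites an existing `127.0.1.1` line in place, otherwise it appends one. The same logic as a pure-function Python sketch (the function name is illustrative):

```python
import re

def ensure_host_entry(hosts: str, name: str) -> str:
    """Mirror the logged /etc/hosts edit: ensure `name` resolves to 127.0.1.1,
    reusing an existing 127.0.1.1 line when one is present."""
    if re.search(rf"^.*\s{re.escape(name)}$", hosts, re.M):
        return hosts  # entry already present; nothing to do
    if re.search(r"^127\.0\.1\.1\s.*$", hosts, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}", hosts, flags=re.M)
    return hosts.rstrip("\n") + f"\n127.0.1.1 {name}\n"

before = "127.0.0.1 localhost\n127.0.1.1 minikube\n"
print(ensure_host_entry(before, "ha-450500"))
```

Like the shell version, this is idempotent: a second invocation finds the entry and leaves the file untouched, which matters because provisioning re-runs this step on restarts.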
	I0317 11:06:48.208085    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 11:06:48.208085    8508 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0317 11:06:48.208085    8508 buildroot.go:174] setting up certificates
	I0317 11:06:48.208085    8508 provision.go:84] configureAuth start
	I0317 11:06:48.208085    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:50.326279    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:50.326875    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:50.326875    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:52.836115    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:06:52.836115    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:52.837225    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:54.934775    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:54.934775    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:54.935139    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:57.464637    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:06:57.465057    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:57.465057    8508 provision.go:143] copyHostCerts
	I0317 11:06:57.465057    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0317 11:06:57.465057    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0317 11:06:57.465647    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0317 11:06:57.465874    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0317 11:06:57.467642    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0317 11:06:57.467642    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0317 11:06:57.467642    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0317 11:06:57.468441    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0317 11:06:57.469741    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0317 11:06:57.469895    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0317 11:06:57.469895    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0317 11:06:57.470604    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0317 11:06:57.471342    8508 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-450500 san=[127.0.0.1 172.25.16.34 ha-450500 localhost minikube]
	I0317 11:06:57.574873    8508 provision.go:177] copyRemoteCerts
	I0317 11:06:57.587807    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 11:06:57.588248    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:59.669944    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:59.670600    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:59.670656    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:02.211738    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:02.211738    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:02.212959    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:07:02.322425    8508 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7345841s)
	I0317 11:07:02.322425    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0317 11:07:02.323040    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 11:07:02.372108    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0317 11:07:02.372652    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0317 11:07:02.424286    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0317 11:07:02.424592    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0317 11:07:02.469019    8508 provision.go:87] duration metric: took 14.2598293s to configureAuth
	I0317 11:07:02.469019    8508 buildroot.go:189] setting minikube options for container-runtime
	I0317 11:07:02.469019    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:07:02.469019    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:04.598319    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:04.598356    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:04.598441    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:07.174176    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:07.174228    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:07.180603    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:07:07.181130    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.34 22 <nil> <nil>}
	I0317 11:07:07.181228    8508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0317 11:07:07.319818    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0317 11:07:07.319818    8508 buildroot.go:70] root file system type: tmpfs
	I0317 11:07:07.320138    8508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0317 11:07:07.320218    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:09.447571    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:09.447802    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:09.447965    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:11.977682    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:11.977682    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:11.984772    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:07:11.985456    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.34 22 <nil> <nil>}
	I0317 11:07:11.985456    8508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0317 11:07:12.148479    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0317 11:07:12.148479    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:14.242515    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:14.242923    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:14.243049    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:16.738945    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:16.739041    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:16.743692    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:07:16.744619    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.34 22 <nil> <nil>}
	I0317 11:07:16.744619    8508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0317 11:07:19.057620    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
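The `diff ... || { mv ...; systemctl ... }` one-liner above is an idempotent install: the new unit file replaces the old one (and triggers a daemon-reload, enable, and restart) only when the two differ or the destination is missing, as happened here on first boot (`diff: can't stat ...`). A Python sketch of the same update-only-if-changed pattern, with the systemctl step left as a comment (file names are illustrative):

```python
import filecmp
import os
import shutil

def install_if_changed(new_path: str, dest: str) -> bool:
    """Move new_path over dest only when contents differ or dest is missing.
    Returns True when an install (and hence a service restart) is needed."""
    if os.path.exists(dest) and filecmp.cmp(new_path, dest, shallow=False):
        os.remove(new_path)
        return False  # identical: skip the reload/restart entirely
    shutil.move(new_path, dest)
    # Real flow then runs:
    #   systemctl daemon-reload && systemctl enable docker && systemctl restart docker
    return True
```

Skipping the restart when nothing changed is the point of the idiom: re-provisioning an already-configured machine leaves the running dockerd undisturbed.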
	I0317 11:07:19.057620    8508 machine.go:96] duration metric: took 45.3674278s to provisionDockerMachine
	I0317 11:07:19.057620    8508 client.go:171] duration metric: took 1m56.7755093s to LocalClient.Create
	I0317 11:07:19.057620    8508 start.go:167] duration metric: took 1m56.7756106s to libmachine.API.Create "ha-450500"
	I0317 11:07:19.057620    8508 start.go:293] postStartSetup for "ha-450500" (driver="hyperv")
	I0317 11:07:19.057620    8508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 11:07:19.071834    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 11:07:19.071834    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:21.191454    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:21.191630    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:21.191630    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:23.766217    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:23.766404    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:23.766877    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:07:23.882290    8508 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8103595s)
	I0317 11:07:23.894382    8508 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 11:07:23.901381    8508 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 11:07:23.901381    8508 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0317 11:07:23.901381    8508 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0317 11:07:23.902983    8508 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> 89402.pem in /etc/ssl/certs
	I0317 11:07:23.903096    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> /etc/ssl/certs/89402.pem
	I0317 11:07:23.914446    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 11:07:23.931326    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /etc/ssl/certs/89402.pem (1708 bytes)
	I0317 11:07:23.981721    8508 start.go:296] duration metric: took 4.924066s for postStartSetup
	I0317 11:07:23.984818    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:26.138443    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:26.138443    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:26.138879    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:28.670115    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:28.670846    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:28.671026    8508 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json ...
	I0317 11:07:28.674281    8508 start.go:128] duration metric: took 2m6.3952934s to createHost
	I0317 11:07:28.674366    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:30.812607    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:30.813135    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:30.813234    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:33.333826    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:33.333826    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:33.339847    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:07:33.340624    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.34 22 <nil> <nil>}
	I0317 11:07:33.340624    8508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 11:07:33.470448    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742209653.491241352
	
	I0317 11:07:33.470448    8508 fix.go:216] guest clock: 1742209653.491241352
	I0317 11:07:33.470448    8508 fix.go:229] Guest: 2025-03-17 11:07:33.491241352 +0000 UTC Remote: 2025-03-17 11:07:28.6742815 +0000 UTC m=+132.126838901 (delta=4.816959852s)
	I0317 11:07:33.470448    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:35.607199    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:35.607372    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:35.607372    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:38.265346    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:38.265346    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:38.275246    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:07:38.275246    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.34 22 <nil> <nil>}
	I0317 11:07:38.275766    8508 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1742209653
	I0317 11:07:38.431790    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 17 11:07:33 UTC 2025
	
	I0317 11:07:38.431904    8508 fix.go:236] clock set: Mon Mar 17 11:07:33 UTC 2025
	 (err=<nil>)
	I0317 11:07:38.431904    8508 start.go:83] releasing machines lock for "ha-450500", held for 2m16.153325s
	I0317 11:07:38.432036    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:40.561868    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:40.562587    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:40.562635    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:43.063749    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:43.063749    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:43.068810    8508 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0317 11:07:43.068991    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:43.078294    8508 ssh_runner.go:195] Run: cat /version.json
	I0317 11:07:43.078294    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:45.304947    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:45.304947    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:45.304947    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:45.304947    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:45.304947    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:45.304947    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:47.967051    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:47.967575    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:47.968040    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:07:47.989228    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:47.989228    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:47.990439    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:07:48.068714    8508 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9997956s)
	W0317 11:07:48.068844    8508 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0317 11:07:48.085047    8508 ssh_runner.go:235] Completed: cat /version.json: (5.0067167s)
	I0317 11:07:48.097661    8508 ssh_runner.go:195] Run: systemctl --version
	I0317 11:07:48.116635    8508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 11:07:48.126020    8508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 11:07:48.136852    8508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 11:07:48.166553    8508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 11:07:48.166553    8508 start.go:495] detecting cgroup driver to use...
	I0317 11:07:48.166553    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0317 11:07:48.198739    8508 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0317 11:07:48.198739    8508 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0317 11:07:48.212733    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 11:07:48.246298    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 11:07:48.265560    8508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 11:07:48.277301    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 11:07:48.308270    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:07:48.337130    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 11:07:48.365607    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:07:48.394249    8508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 11:07:48.424358    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 11:07:48.456129    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 11:07:48.486380    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 11:07:48.516837    8508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 11:07:48.533901    8508 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 11:07:48.545548    8508 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 11:07:48.578255    8508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 11:07:48.609242    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:07:48.805911    8508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 11:07:48.836988    8508 start.go:495] detecting cgroup driver to use...
	I0317 11:07:48.848439    8508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0317 11:07:48.882231    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 11:07:48.918400    8508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 11:07:48.964536    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 11:07:49.000059    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:07:49.036314    8508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 11:07:49.110424    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:07:49.140013    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:07:49.188841    8508 ssh_runner.go:195] Run: which cri-dockerd
	I0317 11:07:49.207748    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0317 11:07:49.235265    8508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0317 11:07:49.287033    8508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0317 11:07:49.504105    8508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0317 11:07:49.686378    8508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0317 11:07:49.686639    8508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0317 11:07:49.730372    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:07:49.916415    8508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 11:07:52.514427    8508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5979935s)
	I0317 11:07:52.525908    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0317 11:07:52.560115    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 11:07:52.596189    8508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0317 11:07:52.803774    8508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0317 11:07:53.001991    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:07:53.197267    8508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0317 11:07:53.238606    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 11:07:53.270323    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:07:53.456084    8508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0317 11:07:53.566069    8508 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0317 11:07:53.577907    8508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0317 11:07:53.587726    8508 start.go:563] Will wait 60s for crictl version
	I0317 11:07:53.597916    8508 ssh_runner.go:195] Run: which crictl
	I0317 11:07:53.613740    8508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 11:07:53.666786    8508 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0317 11:07:53.676894    8508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 11:07:53.717705    8508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 11:07:53.757058    8508 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0317 11:07:53.757284    8508 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0317 11:07:53.761648    8508 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0317 11:07:53.761648    8508 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0317 11:07:53.761648    8508 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0317 11:07:53.761648    8508 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4b:84:d5 Flags:up|broadcast|multicast|running}
	I0317 11:07:53.764814    8508 ip.go:214] interface addr: fe80::f0c7:c31c:6237:ef35/64
	I0317 11:07:53.764814    8508 ip.go:214] interface addr: 172.25.16.1/20
	I0317 11:07:53.777915    8508 ssh_runner.go:195] Run: grep 172.25.16.1	host.minikube.internal$ /etc/hosts
	I0317 11:07:53.783967    8508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:07:53.815608    8508 kubeadm.go:883] updating cluster {Name:ha-450500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 11:07:53.815608    8508 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 11:07:53.823524    8508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 11:07:53.848590    8508 docker.go:689] Got preloaded images: 
	I0317 11:07:53.848590    8508 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.2 wasn't preloaded
	I0317 11:07:53.859692    8508 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0317 11:07:53.886973    8508 ssh_runner.go:195] Run: which lz4
	I0317 11:07:53.894192    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0317 11:07:53.905759    8508 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 11:07:53.913037    8508 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 11:07:53.913037    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349803115 bytes)
	I0317 11:07:56.086887    8508 docker.go:653] duration metric: took 2.1926787s to copy over tarball
	I0317 11:07:56.098139    8508 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 11:08:04.581762    8508 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4834842s)
	I0317 11:08:04.581902    8508 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0317 11:08:04.641598    8508 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0317 11:08:04.660043    8508 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0317 11:08:04.703684    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:08:04.916137    8508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 11:08:08.107535    8508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1913743s)
	I0317 11:08:08.118452    8508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 11:08:08.150187    8508 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0317 11:08:08.150187    8508 cache_images.go:84] Images are preloaded, skipping loading
	I0317 11:08:08.150187    8508 kubeadm.go:934] updating node { 172.25.16.34 8443 v1.32.2 docker true true} ...
	I0317 11:08:08.150187    8508 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.16.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 11:08:08.159478    8508 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0317 11:08:08.224798    8508 cni.go:84] Creating CNI manager for ""
	I0317 11:08:08.224798    8508 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0317 11:08:08.224798    8508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 11:08:08.224798    8508 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.16.34 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-450500 NodeName:ha-450500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.16.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.16.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 11:08:08.225789    8508 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.16.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-450500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.16.34"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.16.34"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 11:08:08.225789    8508 kube-vip.go:115] generating kube-vip config ...
	I0317 11:08:08.236549    8508 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0317 11:08:08.263962    8508 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0317 11:08:08.264219    8508 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.31.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0317 11:08:08.275028    8508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 11:08:08.295667    8508 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 11:08:08.307845    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0317 11:08:08.325114    8508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0317 11:08:08.355516    8508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 11:08:08.383885    8508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0317 11:08:08.415041    8508 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0317 11:08:08.459141    8508 ssh_runner.go:195] Run: grep 172.25.31.254	control-plane.minikube.internal$ /etc/hosts
	I0317 11:08:08.465567    8508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.31.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:08:08.498686    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:08:08.702254    8508 ssh_runner.go:195] Run: sudo systemctl start kubelet
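The `/etc/hosts` rewrite logged above uses a filter-then-append pattern: drop any stale line for the hostname, append the fresh entry, then copy the temp file back over the original. A minimal sketch of the same pattern against a scratch file, so no sudo is needed (the file contents and the `10.0.0.5` stale entry are illustrative):

```shell
# Scratch hosts file standing in for /etc/hosts (contents are illustrative).
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.5\tcontrol-plane.minikube.internal\n' > "$HOSTS"

# Filter out any stale entry for the name (anchored on a leading tab, as the
# minikube command does), then append the current one, into a new file:
TAB=$(printf '\t')
{ grep -v "${TAB}control-plane.minikube.internal\$" "$HOSTS"; \
  printf '172.25.31.254\tcontrol-plane.minikube.internal\n'; } > "$HOSTS.new"

# Replace the original in one step (minikube uses `sudo cp` for /etc/hosts):
mv "$HOSTS.new" "$HOSTS"
grep -c 'control-plane.minikube.internal' "$HOSTS"   # exactly one entry remains
```

Writing to a temp file first means the hosts file is never observed half-written, even though the name may briefly point at the old address.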
	I0317 11:08:08.732709    8508 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500 for IP: 172.25.16.34
	I0317 11:08:08.732847    8508 certs.go:194] generating shared ca certs ...
	I0317 11:08:08.732889    8508 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:08.733679    8508 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0317 11:08:08.734428    8508 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0317 11:08:08.735134    8508 certs.go:256] generating profile certs ...
	I0317 11:08:08.736005    8508 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\client.key
	I0317 11:08:08.736172    8508 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\client.crt with IP's: []
	I0317 11:08:09.705510    8508 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\client.crt ...
	I0317 11:08:09.705510    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\client.crt: {Name:mk792f6749124d49fe283a3b917333e6f455939f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:09.707542    8508 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\client.key ...
	I0317 11:08:09.707542    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\client.key: {Name:mk647a2008ad32a86ebab67a6a73f60ff9f49cf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:09.708213    8508 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.a805433c
	I0317 11:08:09.709275    8508 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.a805433c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.16.34 172.25.31.254]
	I0317 11:08:09.920893    8508 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.a805433c ...
	I0317 11:08:09.920893    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.a805433c: {Name:mkd850b7327a2bc3127130883e5f1b38083dd5a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:09.922619    8508 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.a805433c ...
	I0317 11:08:09.922619    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.a805433c: {Name:mk75d42a89cfec0612d2f7dcffbd0ccb9e1383fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:09.924040    8508 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.a805433c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt
	I0317 11:08:09.937753    8508 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.a805433c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key
	I0317 11:08:09.939743    8508 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key
	I0317 11:08:09.939743    8508 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt with IP's: []
	I0317 11:08:10.018587    8508 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt ...
	I0317 11:08:10.018587    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt: {Name:mk28db02829d3ca8191927e42e9af9bbc1f3f5e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:10.020694    8508 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key ...
	I0317 11:08:10.020694    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key: {Name:mk7f8d2926c5b727595db9114a62364d0fc7349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:10.020980    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0317 11:08:10.022211    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0317 11:08:10.022370    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0317 11:08:10.022563    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0317 11:08:10.022755    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0317 11:08:10.022925    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0317 11:08:10.023114    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0317 11:08:10.032570    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0317 11:08:10.033045    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem (1338 bytes)
	W0317 11:08:10.033854    8508 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940_empty.pem, impossibly tiny 0 bytes
	I0317 11:08:10.033854    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0317 11:08:10.034496    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0317 11:08:10.034760    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0317 11:08:10.035045    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0317 11:08:10.035353    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem (1708 bytes)
	I0317 11:08:10.035353    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem -> /usr/share/ca-certificates/8940.pem
	I0317 11:08:10.036166    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> /usr/share/ca-certificates/89402.pem
	I0317 11:08:10.036326    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:08:10.036482    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 11:08:10.088279    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 11:08:10.131133    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 11:08:10.176329    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0317 11:08:10.218354    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0317 11:08:10.265277    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 11:08:10.306874    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 11:08:10.351251    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 11:08:10.401290    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem --> /usr/share/ca-certificates/8940.pem (1338 bytes)
	I0317 11:08:10.451505    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /usr/share/ca-certificates/89402.pem (1708 bytes)
	I0317 11:08:10.498103    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 11:08:10.543737    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 11:08:10.589659    8508 ssh_runner.go:195] Run: openssl version
	I0317 11:08:10.608987    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8940.pem && ln -fs /usr/share/ca-certificates/8940.pem /etc/ssl/certs/8940.pem"
	I0317 11:08:10.639956    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8940.pem
	I0317 11:08:10.647256    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:46 /usr/share/ca-certificates/8940.pem
	I0317 11:08:10.657878    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8940.pem
	I0317 11:08:10.680550    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8940.pem /etc/ssl/certs/51391683.0"
	I0317 11:08:10.710433    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/89402.pem && ln -fs /usr/share/ca-certificates/89402.pem /etc/ssl/certs/89402.pem"
	I0317 11:08:10.742685    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89402.pem
	I0317 11:08:10.749737    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:46 /usr/share/ca-certificates/89402.pem
	I0317 11:08:10.760758    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89402.pem
	I0317 11:08:10.780972    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/89402.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 11:08:10.811824    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 11:08:10.842803    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:08:10.850081    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:08:10.861526    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:08:10.885418    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
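The `openssl x509 -hash` / `ln -fs ... <hash>.0` sequence above builds the subject-hash symlinks that OpenSSL uses to locate trusted CAs in a certs directory (the same layout `c_rehash` produces). A sketch with a throwaway self-signed certificate in a temp directory (all names here are illustrative):

```shell
# Demonstrate OpenSSL's <subject-hash>.0 CA lookup convention with a
# throwaway self-signed cert (directory, filenames, and CN are illustrative).
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' \
  -keyout "$DIR/ca.key" -out "$DIR/ca.pem" -days 1 2>/dev/null

# The hash is derived from the certificate's subject name, e.g. "b5213941":
HASH=$(openssl x509 -hash -noout -in "$DIR/ca.pem")

# OpenSSL looks up CAs by <hash>.0 (then .1, .2 ... on collisions):
ln -fs "$DIR/ca.pem" "$DIR/$HASH.0"
openssl x509 -noout -subject -in "$DIR/$HASH.0"
```

This is why the log first hashes each installed PEM and then tests for, or creates, the corresponding `/etc/ssl/certs/<hash>.0` symlink.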
	I0317 11:08:10.915356    8508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 11:08:10.925707    8508 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 11:08:10.926224    8508 kubeadm.go:392] StartCluster: {Name:ha-450500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:08:10.935184    8508 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0317 11:08:10.971556    8508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 11:08:11.002448    8508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 11:08:11.038158    8508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 11:08:11.062175    8508 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 11:08:11.062271    8508 kubeadm.go:157] found existing configuration files:
	
	I0317 11:08:11.073756    8508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 11:08:11.095067    8508 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 11:08:11.109979    8508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 11:08:11.140209    8508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 11:08:11.157259    8508 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 11:08:11.168957    8508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 11:08:11.200732    8508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 11:08:11.222130    8508 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 11:08:11.234462    8508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 11:08:11.263987    8508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 11:08:11.284038    8508 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 11:08:11.295071    8508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 11:08:11.313654    8508 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 11:08:11.789699    8508 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 11:08:26.460567    8508 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 11:08:26.460686    8508 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 11:08:26.460772    8508 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 11:08:26.460960    8508 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 11:08:26.461276    8508 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 11:08:26.461428    8508 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 11:08:26.468407    8508 out.go:235]   - Generating certificates and keys ...
	I0317 11:08:26.468660    8508 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 11:08:26.468776    8508 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 11:08:26.468890    8508 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 11:08:26.468890    8508 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 11:08:26.468890    8508 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 11:08:26.469498    8508 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 11:08:26.469621    8508 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 11:08:26.469989    8508 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-450500 localhost] and IPs [172.25.16.34 127.0.0.1 ::1]
	I0317 11:08:26.470311    8508 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 11:08:26.470495    8508 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-450500 localhost] and IPs [172.25.16.34 127.0.0.1 ::1]
	I0317 11:08:26.470495    8508 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 11:08:26.470495    8508 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 11:08:26.471051    8508 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 11:08:26.471234    8508 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 11:08:26.471368    8508 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 11:08:26.471415    8508 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 11:08:26.471415    8508 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 11:08:26.471415    8508 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 11:08:26.471415    8508 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 11:08:26.472163    8508 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 11:08:26.472405    8508 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 11:08:26.477862    8508 out.go:235]   - Booting up control plane ...
	I0317 11:08:26.477887    8508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 11:08:26.477887    8508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 11:08:26.477887    8508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 11:08:26.478628    8508 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 11:08:26.478960    8508 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 11:08:26.479042    8508 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 11:08:26.479496    8508 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 11:08:26.479599    8508 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 11:08:26.479599    8508 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001798459s
	I0317 11:08:26.479599    8508 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 11:08:26.480161    8508 kubeadm.go:310] [api-check] The API server is healthy after 8.502452388s
	I0317 11:08:26.480419    8508 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 11:08:26.480419    8508 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 11:08:26.480419    8508 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 11:08:26.480995    8508 kubeadm.go:310] [mark-control-plane] Marking the node ha-450500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 11:08:26.481134    8508 kubeadm.go:310] [bootstrap-token] Using token: is9sac.0uzmczoyhbxhsua1
	I0317 11:08:26.499289    8508 out.go:235]   - Configuring RBAC rules ...
	I0317 11:08:26.500534    8508 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 11:08:26.500726    8508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 11:08:26.501093    8508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 11:08:26.501429    8508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 11:08:26.501429    8508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 11:08:26.502141    8508 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 11:08:26.502357    8508 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 11:08:26.502357    8508 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 11:08:26.502684    8508 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 11:08:26.502730    8508 kubeadm.go:310] 
	I0317 11:08:26.502730    8508 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 11:08:26.502730    8508 kubeadm.go:310] 
	I0317 11:08:26.502730    8508 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 11:08:26.502730    8508 kubeadm.go:310] 
	I0317 11:08:26.503261    8508 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 11:08:26.503397    8508 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 11:08:26.503508    8508 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 11:08:26.503508    8508 kubeadm.go:310] 
	I0317 11:08:26.503613    8508 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 11:08:26.503613    8508 kubeadm.go:310] 
	I0317 11:08:26.503737    8508 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 11:08:26.503737    8508 kubeadm.go:310] 
	I0317 11:08:26.503737    8508 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 11:08:26.503737    8508 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 11:08:26.503737    8508 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 11:08:26.503737    8508 kubeadm.go:310] 
	I0317 11:08:26.503737    8508 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 11:08:26.503737    8508 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 11:08:26.503737    8508 kubeadm.go:310] 
	I0317 11:08:26.503737    8508 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token is9sac.0uzmczoyhbxhsua1 \
	I0317 11:08:26.503737    8508 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 \
	I0317 11:08:26.503737    8508 kubeadm.go:310] 	--control-plane 
	I0317 11:08:26.503737    8508 kubeadm.go:310] 
	I0317 11:08:26.505311    8508 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 11:08:26.505311    8508 kubeadm.go:310] 
	I0317 11:08:26.505571    8508 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token is9sac.0uzmczoyhbxhsua1 \
	I0317 11:08:26.505571    8508 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 
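The `--discovery-token-ca-cert-hash` value printed by `kubeadm init` above is the SHA-256 digest of the cluster CA's DER-encoded public key. A sketch of how such a value can be recomputed, using a throwaway RSA certificate in place of the real `/etc/kubernetes/pki/ca.crt` (all paths and names are illustrative):

```shell
# Recompute a kubeadm-style discovery CA hash: SHA-256 over the CA's
# DER-encoded public key. The cert here is a throwaway stand-in.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=kubeCA' \
  -keyout "$DIR/ca.key" -out "$DIR/ca.crt" -days 1 2>/dev/null

HASH=$(openssl x509 -pubkey -in "$DIR/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/')
echo "$HASH"    # sha256:<64 hex digits>, the format kubeadm join expects
```

A joining node recomputes this hash from the CA certificate the API server presents and refuses to join if it does not match, which is what makes the token-based bootstrap safe against a spoofed control plane.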
	I0317 11:08:26.505571    8508 cni.go:84] Creating CNI manager for ""
	I0317 11:08:26.505571    8508 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0317 11:08:26.508839    8508 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0317 11:08:26.523761    8508 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0317 11:08:26.531405    8508 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 11:08:26.531405    8508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0317 11:08:26.582932    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0317 11:08:27.384052    8508 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 11:08:27.398229    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:08:27.399238    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450500 minikube.k8s.io/updated_at=2025_03_17T11_08_27_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=ha-450500 minikube.k8s.io/primary=true
	I0317 11:08:27.413886    8508 ops.go:34] apiserver oom_adj: -16
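The `apiserver oom_adj: -16` probe above reads the kernel's OOM-killer bias for the kube-apiserver process out of procfs. A sketch reading the equivalent value for the current shell instead, since no apiserver PID is available here; note the log reads the legacy `oom_adj` file, while `oom_score_adj` is the current interface:

```shell
# Read a process's OOM score adjustment from procfs. The current shell stands
# in for the kube-apiserver PID; a negative value makes the OOM killer less
# likely to target the process.
pid=$$
ADJ=$(cat "/proc/$pid/oom_score_adj")
echo "$ADJ"    # an integer in [-1000, 1000]
```

A strongly negative bias on the apiserver is deliberate: under memory pressure the kernel should kill workloads before the control plane.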
	I0317 11:08:27.607964    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:08:28.108863    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:08:28.606618    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:08:29.109015    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:08:29.607145    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:08:30.107300    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:08:30.225632    8508 kubeadm.go:1113] duration metric: took 2.841509s to wait for elevateKubeSystemPrivileges
	I0317 11:08:30.225632    8508 kubeadm.go:394] duration metric: took 19.299267s to StartCluster
	I0317 11:08:30.225632    8508 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:30.225632    8508 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 11:08:30.228894    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:30.231546    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 11:08:30.231672    8508 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.25.16.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 11:08:30.231672    8508 start.go:241] waiting for startup goroutines ...
	I0317 11:08:30.231672    8508 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 11:08:30.231838    8508 addons.go:69] Setting storage-provisioner=true in profile "ha-450500"
	I0317 11:08:30.231893    8508 addons.go:69] Setting default-storageclass=true in profile "ha-450500"
	I0317 11:08:30.231947    8508 addons.go:238] Setting addon storage-provisioner=true in "ha-450500"
	I0317 11:08:30.231996    8508 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-450500"
	I0317 11:08:30.232059    8508 host.go:66] Checking if "ha-450500" exists ...
	I0317 11:08:30.232059    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:08:30.233192    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:08:30.233827    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:08:30.415922    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.16.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 11:08:30.929641    8508 start.go:971] {"host.minikube.internal": 172.25.16.1} host record injected into CoreDNS's ConfigMap
	I0317 11:08:32.576300    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:08:32.577291    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:32.580666    8508 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:08:32.582645    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:08:32.582645    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:32.583286    8508 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:08:32.583286    8508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 11:08:32.583286    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:08:32.583924    8508 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 11:08:32.584929    8508 kapi.go:59] client config for ha-450500: &rest.Config{Host:"https://172.25.31.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-450500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-450500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2e292e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0317 11:08:32.586383    8508 cert_rotation.go:140] Starting client certificate rotation controller
	I0317 11:08:32.586383    8508 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0317 11:08:32.586910    8508 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0317 11:08:32.586910    8508 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0317 11:08:32.586949    8508 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0317 11:08:32.587426    8508 addons.go:238] Setting addon default-storageclass=true in "ha-450500"
	I0317 11:08:32.587463    8508 host.go:66] Checking if "ha-450500" exists ...
	I0317 11:08:32.588904    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:08:34.974372    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:08:34.974372    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:34.974372    8508 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 11:08:34.974372    8508 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 11:08:34.974372    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:08:35.095004    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:08:35.095004    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:35.096158    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:08:37.297268    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:08:37.297268    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:37.297268    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:08:37.893766    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:08:37.893830    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:37.893830    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:08:38.061352    8508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:08:40.047885    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:08:40.047885    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:40.047885    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:08:40.187407    8508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 11:08:40.331042    8508 round_trippers.go:470] GET https://172.25.31.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0317 11:08:40.331042    8508 round_trippers.go:476] Request Headers:
	I0317 11:08:40.331042    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:08:40.331042    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:08:40.343618    8508 round_trippers.go:581] Response Status: 200 OK in 12 milliseconds
	I0317 11:08:40.343618    8508 round_trippers.go:470] PUT https://172.25.31.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0317 11:08:40.343618    8508 round_trippers.go:476] Request Headers:
	I0317 11:08:40.343618    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:08:40.343618    8508 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 11:08:40.343618    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:08:40.348206    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:08:40.351924    8508 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 11:08:40.355964    8508 addons.go:514] duration metric: took 10.1242178s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0317 11:08:40.356178    8508 start.go:246] waiting for cluster config update ...
	I0317 11:08:40.356178    8508 start.go:255] writing updated cluster config ...
	I0317 11:08:40.359475    8508 out.go:201] 
	I0317 11:08:40.373501    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:08:40.373664    8508 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json ...
	I0317 11:08:40.379719    8508 out.go:177] * Starting "ha-450500-m02" control-plane node in "ha-450500" cluster
	I0317 11:08:40.384727    8508 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 11:08:40.384727    8508 cache.go:56] Caching tarball of preloaded images
	I0317 11:08:40.384727    8508 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 11:08:40.384727    8508 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0317 11:08:40.384727    8508 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json ...
	I0317 11:08:40.389763    8508 start.go:360] acquireMachinesLock for ha-450500-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 11:08:40.389763    8508 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-450500-m02"
	I0317 11:08:40.390757    8508 start.go:93] Provisioning new machine with config: &{Name:ha-450500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 11:08:40.390757    8508 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0317 11:08:40.398762    8508 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0317 11:08:40.398762    8508 start.go:159] libmachine.API.Create for "ha-450500" (driver="hyperv")
	I0317 11:08:40.398762    8508 client.go:168] LocalClient.Create starting
	I0317 11:08:40.398762    8508 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0317 11:08:40.399752    8508 main.go:141] libmachine: Decoding PEM data...
	I0317 11:08:40.399752    8508 main.go:141] libmachine: Parsing certificate...
	I0317 11:08:40.399752    8508 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0317 11:08:40.399752    8508 main.go:141] libmachine: Decoding PEM data...
	I0317 11:08:40.399752    8508 main.go:141] libmachine: Parsing certificate...
	I0317 11:08:40.399752    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0317 11:08:42.280274    8508 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0317 11:08:42.280559    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:42.280559    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0317 11:08:44.005478    8508 main.go:141] libmachine: [stdout =====>] : False
	
	I0317 11:08:44.005478    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:44.005478    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 11:08:45.497998    8508 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 11:08:45.497998    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:45.498300    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 11:08:49.186567    8508 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 11:08:49.186567    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:49.189822    8508 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0317 11:08:49.727014    8508 main.go:141] libmachine: Creating SSH key...
	I0317 11:08:50.391236    8508 main.go:141] libmachine: Creating VM...
	I0317 11:08:50.391236    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 11:08:53.320458    8508 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 11:08:53.320458    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:53.320684    8508 main.go:141] libmachine: Using switch "Default Switch"
	I0317 11:08:53.320684    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 11:08:55.132226    8508 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 11:08:55.132226    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:55.132226    8508 main.go:141] libmachine: Creating VHD
	I0317 11:08:55.132226    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0317 11:08:58.997547    8508 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 433DC87A-8DF4-4BBE-8DA4-9CCBCB4F2077
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0317 11:08:58.997622    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:58.997622    8508 main.go:141] libmachine: Writing magic tar header
	I0317 11:08:58.997697    8508 main.go:141] libmachine: Writing SSH key tar header
	I0317 11:08:59.010563    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0317 11:09:02.215149    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:02.215149    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:02.215417    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\disk.vhd' -SizeBytes 20000MB
	I0317 11:09:04.761419    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:04.761419    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:04.762289    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-450500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0317 11:09:08.448421    8508 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-450500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0317 11:09:08.448421    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:08.448979    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-450500-m02 -DynamicMemoryEnabled $false
	I0317 11:09:10.727631    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:10.728647    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:10.728766    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-450500-m02 -Count 2
	I0317 11:09:12.920580    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:12.921464    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:12.921464    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-450500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\boot2docker.iso'
	I0317 11:09:15.546850    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:15.547848    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:15.547900    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-450500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\disk.vhd'
	I0317 11:09:18.284116    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:18.284116    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:18.285005    8508 main.go:141] libmachine: Starting VM...
	I0317 11:09:18.285005    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-450500-m02
	I0317 11:09:21.463148    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:21.463148    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:21.463148    8508 main.go:141] libmachine: Waiting for host to start...
	I0317 11:09:21.463148    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:09:23.824924    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:09:23.824989    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:23.825068    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:09:26.470325    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:26.470325    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:27.471278    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:09:29.773843    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:09:29.773843    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:29.774365    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:09:32.389518    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:32.390514    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:33.391692    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:09:35.611106    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:09:35.611709    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:35.611709    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:09:38.232452    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:38.232452    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:39.233263    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:09:41.477347    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:09:41.477347    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:41.477347    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:09:44.078632    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:44.078632    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:45.079740    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:09:47.325663    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:09:47.325663    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:47.325663    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:09:49.964742    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:09:49.964742    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:49.965463    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:09:52.118206    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:09:52.119001    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:52.119001    8508 machine.go:93] provisionDockerMachine start ...
	I0317 11:09:52.119145    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:09:54.297468    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:09:54.297468    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:54.298262    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:09:56.867767    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:09:56.867767    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:56.874060    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:09:56.889966    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.21.189 22 <nil> <nil>}
	I0317 11:09:56.890108    8508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 11:09:57.025425    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0317 11:09:57.025535    8508 buildroot.go:166] provisioning hostname "ha-450500-m02"
	I0317 11:09:57.025535    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:09:59.150822    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:09:59.151654    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:59.151817    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:01.694717    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:01.694717    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:01.700683    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:10:01.701352    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.21.189 22 <nil> <nil>}
	I0317 11:10:01.701352    8508 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450500-m02 && echo "ha-450500-m02" | sudo tee /etc/hostname
	I0317 11:10:01.871427    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450500-m02
	
	I0317 11:10:01.872028    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:03.997693    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:03.997693    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:03.998030    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:06.544339    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:06.545038    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:06.550986    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:10:06.551323    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.21.189 22 <nil> <nil>}
	I0317 11:10:06.551323    8508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 11:10:06.700282    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 11:10:06.700391    8508 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0317 11:10:06.700391    8508 buildroot.go:174] setting up certificates
	I0317 11:10:06.700391    8508 provision.go:84] configureAuth start
	I0317 11:10:06.700494    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:08.844761    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:08.844820    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:08.844820    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:11.404801    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:11.404801    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:11.404801    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:13.535299    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:13.535600    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:13.535730    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:16.079207    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:16.079608    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:16.079889    8508 provision.go:143] copyHostCerts
	I0317 11:10:16.079889    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0317 11:10:16.079889    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0317 11:10:16.079889    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0317 11:10:16.080685    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0317 11:10:16.082381    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0317 11:10:16.082381    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0317 11:10:16.082381    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0317 11:10:16.082972    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0317 11:10:16.084083    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0317 11:10:16.084241    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0317 11:10:16.084241    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0317 11:10:16.084769    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0317 11:10:16.085470    8508 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-450500-m02 san=[127.0.0.1 172.25.21.189 ha-450500-m02 localhost minikube]
	I0317 11:10:16.347143    8508 provision.go:177] copyRemoteCerts
	I0317 11:10:16.357740    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 11:10:16.357740    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:18.511269    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:18.511269    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:18.511703    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:21.098129    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:21.098129    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:21.098758    8508 sshutil.go:53] new ssh client: &{IP:172.25.21.189 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\id_rsa Username:docker}
	I0317 11:10:21.218417    8508 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8606408s)
	I0317 11:10:21.219150    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0317 11:10:21.219673    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 11:10:21.267931    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0317 11:10:21.268087    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0317 11:10:21.315290    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0317 11:10:21.315780    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 11:10:21.361115    8508 provision.go:87] duration metric: took 14.6606167s to configureAuth
	I0317 11:10:21.361185    8508 buildroot.go:189] setting minikube options for container-runtime
	I0317 11:10:21.361961    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:10:21.362239    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:23.477735    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:23.477735    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:23.477977    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:25.987084    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:25.988072    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:25.992687    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:10:25.993434    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.21.189 22 <nil> <nil>}
	I0317 11:10:25.993504    8508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0317 11:10:26.143222    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0317 11:10:26.143292    8508 buildroot.go:70] root file system type: tmpfs
	I0317 11:10:26.143486    8508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0317 11:10:26.143574    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:28.309612    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:28.309612    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:28.310386    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:30.898185    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:30.898185    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:30.904890    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:10:30.905640    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.21.189 22 <nil> <nil>}
	I0317 11:10:30.905640    8508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.16.34"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0317 11:10:31.078166    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.16.34
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0317 11:10:31.078166    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:33.244772    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:33.245420    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:33.245566    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:35.795882    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:35.795882    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:35.800061    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:10:35.800835    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.21.189 22 <nil> <nil>}
	I0317 11:10:35.800835    8508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0317 11:10:38.093045    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0317 11:10:38.093128    8508 machine.go:96] duration metric: took 45.9737901s to provisionDockerMachine
	I0317 11:10:38.093128    8508 client.go:171] duration metric: took 1m57.6935059s to LocalClient.Create
	I0317 11:10:38.093128    8508 start.go:167] duration metric: took 1m57.6935059s to libmachine.API.Create "ha-450500"
	I0317 11:10:38.093128    8508 start.go:293] postStartSetup for "ha-450500-m02" (driver="hyperv")
	I0317 11:10:38.093262    8508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 11:10:38.105541    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 11:10:38.105541    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:40.334763    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:40.334763    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:40.334763    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:42.858740    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:42.858740    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:42.859553    8508 sshutil.go:53] new ssh client: &{IP:172.25.21.189 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\id_rsa Username:docker}
	I0317 11:10:42.978688    8508 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8731109s)
	I0317 11:10:42.991063    8508 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 11:10:42.997925    8508 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 11:10:42.997925    8508 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0317 11:10:42.998421    8508 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0317 11:10:42.999417    8508 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> 89402.pem in /etc/ssl/certs
	I0317 11:10:42.999481    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> /etc/ssl/certs/89402.pem
	I0317 11:10:43.010619    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 11:10:43.031199    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /etc/ssl/certs/89402.pem (1708 bytes)
	I0317 11:10:43.080473    8508 start.go:296] duration metric: took 4.9871982s for postStartSetup
	I0317 11:10:43.083859    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:45.210865    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:45.210865    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:45.211441    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:47.733190    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:47.733733    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:47.734026    8508 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json ...
	I0317 11:10:47.736302    8508 start.go:128] duration metric: took 2m7.3446128s to createHost
	I0317 11:10:47.736302    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:49.863250    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:49.863410    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:49.863410    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:52.465464    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:52.465464    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:52.472121    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:10:52.472839    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.21.189 22 <nil> <nil>}
	I0317 11:10:52.472839    8508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 11:10:52.620917    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742209852.642347342
	
	I0317 11:10:52.620917    8508 fix.go:216] guest clock: 1742209852.642347342
	I0317 11:10:52.620917    8508 fix.go:229] Guest: 2025-03-17 11:10:52.642347342 +0000 UTC Remote: 2025-03-17 11:10:47.7363023 +0000 UTC m=+331.187404701 (delta=4.906045042s)
	I0317 11:10:52.621459    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:54.750504    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:54.750707    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:54.750784    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:57.317146    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:57.318125    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:57.324084    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:10:57.324902    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.21.189 22 <nil> <nil>}
	I0317 11:10:57.324902    8508 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1742209852
	I0317 11:10:57.486424    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 17 11:10:52 UTC 2025
	
	I0317 11:10:57.486424    8508 fix.go:236] clock set: Mon Mar 17 11:10:52 UTC 2025
	 (err=<nil>)
	I0317 11:10:57.486424    8508 start.go:83] releasing machines lock for "ha-450500-m02", held for 2m17.0956571s
	I0317 11:10:57.486424    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:59.617449    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:59.618417    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:59.618559    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:11:02.354004    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:11:02.354744    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:02.363332    8508 out.go:177] * Found network options:
	I0317 11:11:02.367099    8508 out.go:177]   - NO_PROXY=172.25.16.34
	W0317 11:11:02.370435    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	I0317 11:11:02.373194    8508 out.go:177]   - NO_PROXY=172.25.16.34
	W0317 11:11:02.375727    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	W0317 11:11:02.377217    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	I0317 11:11:02.379815    8508 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0317 11:11:02.379815    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:11:02.392967    8508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 11:11:02.392967    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:11:04.732836    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:11:04.732836    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:04.732836    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:11:04.732836    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:11:04.732836    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:04.732836    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:11:07.513459    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:11:07.513765    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:07.513765    8508 sshutil.go:53] new ssh client: &{IP:172.25.21.189 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\id_rsa Username:docker}
	I0317 11:11:07.535407    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:11:07.535464    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:07.536241    8508 sshutil.go:53] new ssh client: &{IP:172.25.21.189 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\id_rsa Username:docker}
	I0317 11:11:07.611515    8508 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.231605s)
	W0317 11:11:07.611589    8508 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0317 11:11:07.630055    8508 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2370487s)
	W0317 11:11:07.630055    8508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 11:11:07.642471    8508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 11:11:07.675231    8508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 11:11:07.675231    8508 start.go:495] detecting cgroup driver to use...
	I0317 11:11:07.675231    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:11:07.722168    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 11:11:07.754558    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0317 11:11:07.755586    8508 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0317 11:11:07.755586    8508 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0317 11:11:07.775578    8508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 11:11:07.786068    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 11:11:07.815746    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:11:07.849582    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 11:11:07.879688    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:11:07.909914    8508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 11:11:07.941343    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 11:11:07.973124    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 11:11:08.006601    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 11:11:08.036154    8508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 11:11:08.054170    8508 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 11:11:08.065562    8508 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 11:11:08.102038    8508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 11:11:08.139383    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:11:08.336947    8508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 11:11:08.373542    8508 start.go:495] detecting cgroup driver to use...
	I0317 11:11:08.387701    8508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0317 11:11:08.427461    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 11:11:08.459268    8508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 11:11:08.500892    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 11:11:08.538038    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:11:08.577227    8508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 11:11:08.647219    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:11:08.674116    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:11:08.724662    8508 ssh_runner.go:195] Run: which cri-dockerd
	I0317 11:11:08.740690    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0317 11:11:08.756236    8508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0317 11:11:08.799521    8508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0317 11:11:09.002660    8508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0317 11:11:09.194240    8508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0317 11:11:09.194320    8508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0317 11:11:09.239817    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:11:09.443257    8508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 11:11:12.046877    8508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6036s)
	I0317 11:11:12.058868    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0317 11:11:12.098701    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 11:11:12.141482    8508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0317 11:11:12.339695    8508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0317 11:11:12.551321    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:11:12.754154    8508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0317 11:11:12.794725    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 11:11:12.829972    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:11:13.035377    8508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0317 11:11:13.145802    8508 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0317 11:11:13.157487    8508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0317 11:11:13.166241    8508 start.go:563] Will wait 60s for crictl version
	I0317 11:11:13.179118    8508 ssh_runner.go:195] Run: which crictl
	I0317 11:11:13.199444    8508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 11:11:13.264554    8508 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0317 11:11:13.275164    8508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 11:11:13.323695    8508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 11:11:13.376760    8508 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0317 11:11:13.380719    8508 out.go:177]   - env NO_PROXY=172.25.16.34
	I0317 11:11:13.383197    8508 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0317 11:11:13.386693    8508 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0317 11:11:13.386693    8508 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0317 11:11:13.386693    8508 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0317 11:11:13.386693    8508 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4b:84:d5 Flags:up|broadcast|multicast|running}
	I0317 11:11:13.389636    8508 ip.go:214] interface addr: fe80::f0c7:c31c:6237:ef35/64
	I0317 11:11:13.389636    8508 ip.go:214] interface addr: 172.25.16.1/20
	I0317 11:11:13.402844    8508 ssh_runner.go:195] Run: grep 172.25.16.1	host.minikube.internal$ /etc/hosts
	I0317 11:11:13.409210    8508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:11:13.436734    8508 mustload.go:65] Loading cluster: ha-450500
	I0317 11:11:13.437452    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:11:13.437724    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:11:15.584271    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:11:15.584271    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:15.584271    8508 host.go:66] Checking if "ha-450500" exists ...
	I0317 11:11:15.585455    8508 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500 for IP: 172.25.21.189
	I0317 11:11:15.585513    8508 certs.go:194] generating shared ca certs ...
	I0317 11:11:15.585540    8508 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:11:15.586118    8508 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0317 11:11:15.586473    8508 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0317 11:11:15.586473    8508 certs.go:256] generating profile certs ...
	I0317 11:11:15.587438    8508 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\client.key
	I0317 11:11:15.587438    8508 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.b5f43119
	I0317 11:11:15.587438    8508 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.b5f43119 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.16.34 172.25.21.189 172.25.31.254]
	I0317 11:11:15.855076    8508 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.b5f43119 ...
	I0317 11:11:15.855076    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.b5f43119: {Name:mk30b3f325c53c61260398379690859ae7d2df8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:11:15.857179    8508 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.b5f43119 ...
	I0317 11:11:15.857179    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.b5f43119: {Name:mke75f3701be7cd8ecc8e9e9772462479c9067b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:11:15.858609    8508 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.b5f43119 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt
	I0317 11:11:15.873747    8508 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.b5f43119 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key
	I0317 11:11:15.875408    8508 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key
	I0317 11:11:15.875408    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0317 11:11:15.876100    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0317 11:11:15.876279    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0317 11:11:15.876507    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0317 11:11:15.876507    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0317 11:11:15.876507    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0317 11:11:15.876507    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0317 11:11:15.876507    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0317 11:11:15.877837    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem (1338 bytes)
	W0317 11:11:15.878241    8508 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940_empty.pem, impossibly tiny 0 bytes
	I0317 11:11:15.878706    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0317 11:11:15.879330    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0317 11:11:15.879682    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0317 11:11:15.879682    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0317 11:11:15.880493    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem (1708 bytes)
	I0317 11:11:15.880795    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:11:15.880975    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem -> /usr/share/ca-certificates/8940.pem
	I0317 11:11:15.881248    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> /usr/share/ca-certificates/89402.pem
	I0317 11:11:15.881490    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:11:18.065942    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:11:18.066783    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:18.066783    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:11:20.653806    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:11:20.653806    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:20.654801    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:11:20.763620    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0317 11:11:20.775877    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0317 11:11:20.811566    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0317 11:11:20.819881    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0317 11:11:20.848832    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0317 11:11:20.856700    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0317 11:11:20.889393    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0317 11:11:20.899897    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0317 11:11:20.937787    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0317 11:11:20.943715    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0317 11:11:20.978499    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0317 11:11:20.987288    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0317 11:11:21.007129    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 11:11:21.059138    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 11:11:21.114696    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 11:11:21.162731    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0317 11:11:21.223677    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0317 11:11:21.277851    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 11:11:21.329299    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 11:11:21.378831    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 11:11:21.424582    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 11:11:21.473365    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem --> /usr/share/ca-certificates/8940.pem (1338 bytes)
	I0317 11:11:21.522141    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /usr/share/ca-certificates/89402.pem (1708 bytes)
	I0317 11:11:21.572096    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0317 11:11:21.605651    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0317 11:11:21.638656    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0317 11:11:21.672872    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0317 11:11:21.707341    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0317 11:11:21.739400    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0317 11:11:21.768609    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0317 11:11:21.816789    8508 ssh_runner.go:195] Run: openssl version
	I0317 11:11:21.836412    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 11:11:21.866874    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:11:21.873633    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:11:21.885249    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:11:21.904629    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 11:11:21.934812    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8940.pem && ln -fs /usr/share/ca-certificates/8940.pem /etc/ssl/certs/8940.pem"
	I0317 11:11:21.964783    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8940.pem
	I0317 11:11:21.972542    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:46 /usr/share/ca-certificates/8940.pem
	I0317 11:11:21.983264    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8940.pem
	I0317 11:11:22.003704    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8940.pem /etc/ssl/certs/51391683.0"
	I0317 11:11:22.037715    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/89402.pem && ln -fs /usr/share/ca-certificates/89402.pem /etc/ssl/certs/89402.pem"
	I0317 11:11:22.069329    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89402.pem
	I0317 11:11:22.075724    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:46 /usr/share/ca-certificates/89402.pem
	I0317 11:11:22.086532    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89402.pem
	I0317 11:11:22.106439    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/89402.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 11:11:22.136854    8508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 11:11:22.143232    8508 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 11:11:22.143699    8508 kubeadm.go:934] updating node {m02 172.25.21.189 8443 v1.32.2 docker true true} ...
	I0317 11:11:22.143926    8508 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.21.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 11:11:22.143978    8508 kube-vip.go:115] generating kube-vip config ...
	I0317 11:11:22.155614    8508 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0317 11:11:22.187166    8508 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0317 11:11:22.187373    8508 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.31.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0317 11:11:22.199788    8508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 11:11:22.218340    8508 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0317 11:11:22.230705    8508 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0317 11:11:22.256199    8508 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl
	I0317 11:11:22.256266    8508 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm
	I0317 11:11:22.256373    8508 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet
	I0317 11:11:23.751426    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0317 11:11:23.760500    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm -> /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0317 11:11:23.764128    8508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0317 11:11:23.771177    8508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0317 11:11:23.772199    8508 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0317 11:11:23.772199    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0317 11:11:23.786952    8508 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0317 11:11:23.787255    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0317 11:11:24.056023    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 11:11:24.100011    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0317 11:11:24.111597    8508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0317 11:11:24.139642    8508 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0317 11:11:24.139642    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0317 11:11:25.038725    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0317 11:11:25.059333    8508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0317 11:11:25.090293    8508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 11:11:25.125828    8508 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0317 11:11:25.172997    8508 ssh_runner.go:195] Run: grep 172.25.31.254	control-plane.minikube.internal$ /etc/hosts
	I0317 11:11:25.180226    8508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.31.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:11:25.216634    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:11:25.430747    8508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:11:25.462858    8508 host.go:66] Checking if "ha-450500" exists ...
	I0317 11:11:25.463686    8508 start.go:317] joinCluster: &{Name:ha-450500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.21.189 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:11:25.463686    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0317 11:11:25.463686    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:11:27.640742    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:11:27.641799    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:27.641850    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:11:30.303419    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:11:30.303419    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:30.304323    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:11:30.822626    8508 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.3588263s)
	I0317 11:11:30.822682    8508 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.25.21.189 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 11:11:30.822807    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vn1ehv.5h9d51qftui03qsu --discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-450500-m02 --control-plane --apiserver-advertise-address=172.25.21.189 --apiserver-bind-port=8443"
	I0317 11:12:10.937739    8508 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vn1ehv.5h9d51qftui03qsu --discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-450500-m02 --control-plane --apiserver-advertise-address=172.25.21.189 --apiserver-bind-port=8443": (40.1146327s)
	I0317 11:12:10.937739    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0317 11:12:11.686764    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450500-m02 minikube.k8s.io/updated_at=2025_03_17T11_12_11_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=ha-450500 minikube.k8s.io/primary=false
	I0317 11:12:11.909051    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-450500-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0317 11:12:12.107018    8508 start.go:319] duration metric: took 46.6429838s to joinCluster
	I0317 11:12:12.107252    8508 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.25.21.189 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 11:12:12.107936    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:12:12.111193    8508 out.go:177] * Verifying Kubernetes components...
	I0317 11:12:12.127426    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:12:12.513757    8508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:12:12.552255    8508 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 11:12:12.552786    8508 kapi.go:59] client config for ha-450500: &rest.Config{Host:"https://172.25.31.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-450500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-450500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2e292e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0317 11:12:12.552786    8508 kubeadm.go:483] Overriding stale ClientConfig host https://172.25.31.254:8443 with https://172.25.16.34:8443
	I0317 11:12:12.553861    8508 node_ready.go:35] waiting up to 6m0s for node "ha-450500-m02" to be "Ready" ...
	I0317 11:12:12.554073    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:12.554120    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:12.554120    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:12.554120    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:12.574739    8508 round_trippers.go:581] Response Status: 200 OK in 20 milliseconds
	I0317 11:12:13.055100    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:13.055100    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:13.055100    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:13.055100    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:13.063791    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:12:13.554915    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:13.554915    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:13.554915    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:13.554915    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:13.563528    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:12:14.054350    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:14.054350    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:14.054350    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:14.054350    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:14.060569    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:14.555403    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:14.555403    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:14.555525    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:14.555525    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:14.561945    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:14.562472    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:15.054522    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:15.054769    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:15.054769    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:15.054828    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:15.061105    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:15.554963    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:15.554963    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:15.554963    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:15.554963    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:15.560978    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:16.056259    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:16.056259    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:16.056259    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:16.056259    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:16.064445    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:12:16.554996    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:16.554996    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:16.554996    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:16.554996    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:16.562633    8508 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 11:12:16.562766    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:17.055145    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:17.055214    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:17.055307    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:17.055307    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:17.472772    8508 round_trippers.go:581] Response Status: 200 OK in 417 milliseconds
	I0317 11:12:17.554591    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:17.554591    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:17.554591    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:17.554591    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:17.563882    8508 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0317 11:12:18.054353    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:18.054353    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:18.054353    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:18.054353    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:18.059804    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:18.554990    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:18.554990    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:18.554990    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:18.555060    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:18.560461    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:19.055203    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:19.055265    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:19.055265    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:19.055265    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:19.063703    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:12:19.064111    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:19.555260    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:19.555260    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:19.555260    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:19.555260    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:19.567376    8508 round_trippers.go:581] Response Status: 200 OK in 12 milliseconds
	I0317 11:12:20.054206    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:20.054206    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:20.054206    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:20.054206    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:20.069649    8508 round_trippers.go:581] Response Status: 200 OK in 15 milliseconds
	I0317 11:12:20.555136    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:20.555136    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:20.555136    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:20.555235    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:20.559309    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:21.055035    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:21.055035    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:21.055035    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:21.055035    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:21.061059    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:21.554986    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:21.554986    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:21.554986    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:21.554986    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:21.561654    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:21.561654    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:22.055151    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:22.055151    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:22.055151    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:22.055151    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:22.061716    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:22.554919    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:22.554919    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:22.554919    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:22.554919    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:22.561786    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:23.055543    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:23.055621    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:23.055686    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:23.055686    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:23.060939    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:23.554577    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:23.554577    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:23.554577    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:23.554577    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:23.559823    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:24.055284    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:24.055284    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:24.055284    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:24.055284    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:24.061548    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:24.061973    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:24.554517    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:24.554603    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:24.554603    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:24.554721    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:24.561374    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:25.056407    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:25.056475    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:25.056475    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:25.056475    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:25.061740    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:25.555351    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:25.555408    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:25.555463    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:25.555463    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:25.569896    8508 round_trippers.go:581] Response Status: 200 OK in 14 milliseconds
	I0317 11:12:26.054220    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:26.054220    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:26.054220    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:26.054220    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:26.059725    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:26.554488    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:26.554488    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:26.554488    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:26.554488    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:26.560770    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:26.560770    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:27.054561    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:27.054561    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:27.054561    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:27.054561    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:27.063034    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:12:27.555207    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:27.555207    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:27.555207    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:27.555324    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:27.561216    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:28.054424    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:28.054424    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:28.054424    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:28.054424    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:28.064498    8508 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0317 11:12:28.555572    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:28.555572    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:28.555572    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:28.555664    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:28.560500    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:28.561108    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:29.054141    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:29.054141    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:29.054141    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:29.054141    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:29.060995    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:29.555097    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:29.555097    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:29.555097    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:29.555097    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:29.560589    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:30.055344    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:30.055344    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:30.055344    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:30.055344    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:30.060751    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:30.555767    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:30.555767    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:30.555767    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:30.555767    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:30.563343    8508 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 11:12:30.564064    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:31.054839    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:31.054839    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:31.054839    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:31.054839    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:31.061108    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:31.554153    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:31.554153    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:31.554153    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:31.554153    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:31.560762    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:32.054846    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:32.054846    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:32.054846    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:32.054846    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:32.061166    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:32.554290    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:32.554290    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:32.554290    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:32.554290    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:32.559868    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:33.054691    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:33.054691    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.054885    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.054885    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.059436    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:33.059436    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:33.554330    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:33.554330    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.554330    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.554330    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.559415    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:33.559529    8508 node_ready.go:49] node "ha-450500-m02" has status "Ready":"True"
	I0317 11:12:33.559529    8508 node_ready.go:38] duration metric: took 21.0055102s for node "ha-450500-m02" to be "Ready" ...
	I0317 11:12:33.559529    8508 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:12:33.559529    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:12:33.560075    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.560075    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.560075    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.564508    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:33.567369    8508 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-qd2nj" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.567369    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-qd2nj
	I0317 11:12:33.567614    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.567614    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.567646    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.571921    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:33.571921    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:33.571921    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.571921    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.571921    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.576801    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:33.577092    8508 pod_ready.go:93] pod "coredns-668d6bf9bc-qd2nj" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:33.577092    8508 pod_ready.go:82] duration metric: took 9.723ms for pod "coredns-668d6bf9bc-qd2nj" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.577092    8508 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-rhhkv" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.577307    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-rhhkv
	I0317 11:12:33.577307    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.577307    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.577307    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.583431    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:33.583464    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:33.583464    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.583464    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.583464    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.587422    8508 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 11:12:33.588494    8508 pod_ready.go:93] pod "coredns-668d6bf9bc-rhhkv" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:33.588531    8508 pod_ready.go:82] duration metric: took 11.4387ms for pod "coredns-668d6bf9bc-rhhkv" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.588531    8508 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.588712    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450500
	I0317 11:12:33.588712    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.588712    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.588712    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.592407    8508 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 11:12:33.592819    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:33.592848    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.592848    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.592848    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.597081    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:33.597081    8508 pod_ready.go:93] pod "etcd-ha-450500" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:33.597081    8508 pod_ready.go:82] duration metric: took 8.5033ms for pod "etcd-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.597081    8508 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.597081    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450500-m02
	I0317 11:12:33.597081    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.597081    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.597081    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.601174    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:33.601955    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:33.601955    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.601955    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.601955    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.605551    8508 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 11:12:33.605861    8508 pod_ready.go:93] pod "etcd-ha-450500-m02" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:33.605980    8508 pod_ready.go:82] duration metric: took 8.899ms for pod "etcd-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.605980    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.754563    8508 request.go:661] Waited for 148.581ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500
	I0317 11:12:33.755133    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500
	I0317 11:12:33.755133    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.755133    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.755133    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.760942    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:33.955180    8508 request.go:661] Waited for 193.5967ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:33.955180    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:33.955180    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.955180    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.955180    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.966174    8508 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0317 11:12:33.966507    8508 pod_ready.go:93] pod "kube-apiserver-ha-450500" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:33.966507    8508 pod_ready.go:82] duration metric: took 360.5238ms for pod "kube-apiserver-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.966507    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:34.154458    8508 request.go:661] Waited for 187.9492ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500-m02
	I0317 11:12:34.154458    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500-m02
	I0317 11:12:34.154458    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:34.154458    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:34.154458    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:34.161317    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:34.355330    8508 request.go:661] Waited for 193.5721ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:34.355330    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:34.355330    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:34.355330    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:34.355330    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:34.361875    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:34.362289    8508 pod_ready.go:93] pod "kube-apiserver-ha-450500-m02" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:34.362289    8508 pod_ready.go:82] duration metric: took 395.779ms for pod "kube-apiserver-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:34.362289    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:34.554901    8508 request.go:661] Waited for 192.6106ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500
	I0317 11:12:34.555310    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500
	I0317 11:12:34.555310    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:34.555310    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:34.555310    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:34.559913    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:34.754488    8508 request.go:661] Waited for 193.9322ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:34.754488    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:34.754488    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:34.754488    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:34.754488    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:34.761347    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:34.761774    8508 pod_ready.go:93] pod "kube-controller-manager-ha-450500" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:34.761774    8508 pod_ready.go:82] duration metric: took 399.4824ms for pod "kube-controller-manager-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:34.761834    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:34.954427    8508 request.go:661] Waited for 192.5119ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500-m02
	I0317 11:12:34.954427    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500-m02
	I0317 11:12:34.954427    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:34.954945    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:34.954945    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:34.960264    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:35.154734    8508 request.go:661] Waited for 193.9632ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:35.155257    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:35.155257    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:35.155257    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:35.155257    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:35.161400    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:35.162182    8508 pod_ready.go:93] pod "kube-controller-manager-ha-450500-m02" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:35.162303    8508 pod_ready.go:82] duration metric: took 400.4661ms for pod "kube-controller-manager-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:35.162303    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fthkw" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:35.354561    8508 request.go:661] Waited for 192.1187ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fthkw
	I0317 11:12:35.354561    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fthkw
	I0317 11:12:35.354561    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:35.354561    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:35.354561    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:35.360908    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:35.555745    8508 request.go:661] Waited for 194.6405ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:35.556296    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:35.556392    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:35.556392    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:35.556392    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:35.563530    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:35.563928    8508 pod_ready.go:93] pod "kube-proxy-fthkw" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:35.563928    8508 pod_ready.go:82] duration metric: took 401.6221ms for pod "kube-proxy-fthkw" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:35.563928    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jzvxr" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:35.754624    8508 request.go:661] Waited for 190.6943ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzvxr
	I0317 11:12:35.755143    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzvxr
	I0317 11:12:35.755143    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:35.755143    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:35.755286    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:35.764652    8508 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0317 11:12:35.955484    8508 request.go:661] Waited for 190.8307ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:35.955484    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:35.955484    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:35.955484    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:35.955484    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:35.961128    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:35.961561    8508 pod_ready.go:93] pod "kube-proxy-jzvxr" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:35.961561    8508 pod_ready.go:82] duration metric: took 397.6296ms for pod "kube-proxy-jzvxr" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:35.961561    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:36.155229    8508 request.go:661] Waited for 193.4257ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500
	I0317 11:12:36.155229    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500
	I0317 11:12:36.155876    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:36.155876    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:36.155919    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:36.161058    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:36.355653    8508 request.go:661] Waited for 194.2269ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:36.355981    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:36.355981    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:36.356011    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:36.356011    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:36.361762    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:36.362194    8508 pod_ready.go:93] pod "kube-scheduler-ha-450500" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:36.362194    8508 pod_ready.go:82] duration metric: took 400.6297ms for pod "kube-scheduler-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:36.362194    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:36.555115    8508 request.go:661] Waited for 192.7615ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500-m02
	I0317 11:12:36.555586    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500-m02
	I0317 11:12:36.555641    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:36.555641    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:36.555682    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:36.561551    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:36.755413    8508 request.go:661] Waited for 193.2923ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:36.755413    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:36.755413    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:36.755413    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:36.755413    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:36.764420    8508 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0317 11:12:36.765314    8508 pod_ready.go:93] pod "kube-scheduler-ha-450500-m02" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:36.765314    8508 pod_ready.go:82] duration metric: took 403.1172ms for pod "kube-scheduler-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:36.765386    8508 pod_ready.go:39] duration metric: took 3.2058328s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:12:36.765386    8508 api_server.go:52] waiting for apiserver process to appear ...
	I0317 11:12:36.778349    8508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 11:12:36.810488    8508 api_server.go:72] duration metric: took 24.7029659s to wait for apiserver process to appear ...
	I0317 11:12:36.810581    8508 api_server.go:88] waiting for apiserver healthz status ...
	I0317 11:12:36.810581    8508 api_server.go:253] Checking apiserver healthz at https://172.25.16.34:8443/healthz ...
	I0317 11:12:36.826345    8508 api_server.go:279] https://172.25.16.34:8443/healthz returned 200:
	ok
	I0317 11:12:36.826548    8508 round_trippers.go:470] GET https://172.25.16.34:8443/version
	I0317 11:12:36.826548    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:36.826548    8508 round_trippers.go:480]     Accept: application/json, */*
	I0317 11:12:36.826548    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:36.828591    8508 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 11:12:36.828798    8508 api_server.go:141] control plane version: v1.32.2
	I0317 11:12:36.828798    8508 api_server.go:131] duration metric: took 18.2166ms to wait for apiserver health ...
	I0317 11:12:36.828896    8508 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 11:12:36.954895    8508 request.go:661] Waited for 125.882ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:12:36.955530    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:12:36.955530    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:36.955530    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:36.955530    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:36.962711    8508 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 11:12:36.965128    8508 system_pods.go:59] 17 kube-system pods found
	I0317 11:12:36.965189    8508 system_pods.go:61] "coredns-668d6bf9bc-qd2nj" [1f982191-c45a-4681-907d-a0d9220b1f77] Running
	I0317 11:12:36.965251    8508 system_pods.go:61] "coredns-668d6bf9bc-rhhkv" [0dc113a4-430f-4c5b-bc05-05d8cc014ed7] Running
	I0317 11:12:36.965251    8508 system_pods.go:61] "etcd-ha-450500" [7a735c5c-89ec-488d-95c8-f7fa1160fa3c] Running
	I0317 11:12:36.965251    8508 system_pods.go:61] "etcd-ha-450500-m02" [78234926-bd4c-41f7-9f48-43e2bcd543a4] Running
	I0317 11:12:36.965251    8508 system_pods.go:61] "kindnet-ch8f7" [6247c683-e723-4b72-b373-89cb4f1b576d] Running
	I0317 11:12:36.965251    8508 system_pods.go:61] "kindnet-prwhr" [0f7a825d-bd7c-4428-8685-cfa8926ef827] Running
	I0317 11:12:36.965251    8508 system_pods.go:61] "kube-apiserver-ha-450500" [232d0746-fa49-4d17-b36e-557164865a8f] Running
	I0317 11:12:36.965251    8508 system_pods.go:61] "kube-apiserver-ha-450500-m02" [a445c598-90ff-4e54-a96f-3d206a54a108] Running
	I0317 11:12:36.965314    8508 system_pods.go:61] "kube-controller-manager-ha-450500" [afff30e2-5638-4b0f-bbe0-ec65ff25eef4] Running
	I0317 11:12:36.965314    8508 system_pods.go:61] "kube-controller-manager-ha-450500-m02" [e931480a-7ff2-4264-bef5-ff129c603e77] Running
	I0317 11:12:36.965314    8508 system_pods.go:61] "kube-proxy-fthkw" [8a9b1cd2-2eb8-49ac-8cc5-df138d6d0670] Running
	I0317 11:12:36.965314    8508 system_pods.go:61] "kube-proxy-jzvxr" [eeae069e-8b5a-449e-9fe2-ecafcd9733eb] Running
	I0317 11:12:36.965314    8508 system_pods.go:61] "kube-scheduler-ha-450500" [66d11ee5-babc-401c-bf3f-8bd94eb09e06] Running
	I0317 11:12:36.965375    8508 system_pods.go:61] "kube-scheduler-ha-450500-m02" [053f7a43-bff9-4dc7-8d8d-126f265dfbed] Running
	I0317 11:12:36.965375    8508 system_pods.go:61] "kube-vip-ha-450500" [55e8c247-e66e-47b2-b766-ead538ec0b9a] Running
	I0317 11:12:36.965375    8508 system_pods.go:61] "kube-vip-ha-450500-m02" [df1997ea-6b13-4caa-a041-2c63da793276] Running
	I0317 11:12:36.965414    8508 system_pods.go:61] "storage-provisioner" [b1a4725e-cedf-428a-a320-59d7374cba0d] Running
	I0317 11:12:36.965414    8508 system_pods.go:74] duration metric: took 136.5164ms to wait for pod list to return data ...
	I0317 11:12:36.965414    8508 default_sa.go:34] waiting for default service account to be created ...
	I0317 11:12:37.154827    8508 request.go:661] Waited for 189.2947ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/default/serviceaccounts
	I0317 11:12:37.154827    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/default/serviceaccounts
	I0317 11:12:37.154827    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:37.154827    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:37.154827    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:37.164412    8508 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0317 11:12:37.164577    8508 default_sa.go:45] found service account: "default"
	I0317 11:12:37.164577    8508 default_sa.go:55] duration metric: took 199.1615ms for default service account to be created ...
	I0317 11:12:37.164577    8508 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 11:12:37.355377    8508 request.go:661] Waited for 190.7988ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:12:37.355377    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:12:37.355377    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:37.355377    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:37.355377    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:37.362316    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:37.364647    8508 system_pods.go:86] 17 kube-system pods found
	I0317 11:12:37.364647    8508 system_pods.go:89] "coredns-668d6bf9bc-qd2nj" [1f982191-c45a-4681-907d-a0d9220b1f77] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "coredns-668d6bf9bc-rhhkv" [0dc113a4-430f-4c5b-bc05-05d8cc014ed7] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "etcd-ha-450500" [7a735c5c-89ec-488d-95c8-f7fa1160fa3c] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "etcd-ha-450500-m02" [78234926-bd4c-41f7-9f48-43e2bcd543a4] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kindnet-ch8f7" [6247c683-e723-4b72-b373-89cb4f1b576d] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kindnet-prwhr" [0f7a825d-bd7c-4428-8685-cfa8926ef827] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-apiserver-ha-450500" [232d0746-fa49-4d17-b36e-557164865a8f] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-apiserver-ha-450500-m02" [a445c598-90ff-4e54-a96f-3d206a54a108] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-controller-manager-ha-450500" [afff30e2-5638-4b0f-bbe0-ec65ff25eef4] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-controller-manager-ha-450500-m02" [e931480a-7ff2-4264-bef5-ff129c603e77] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-proxy-fthkw" [8a9b1cd2-2eb8-49ac-8cc5-df138d6d0670] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-proxy-jzvxr" [eeae069e-8b5a-449e-9fe2-ecafcd9733eb] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-scheduler-ha-450500" [66d11ee5-babc-401c-bf3f-8bd94eb09e06] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-scheduler-ha-450500-m02" [053f7a43-bff9-4dc7-8d8d-126f265dfbed] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-vip-ha-450500" [55e8c247-e66e-47b2-b766-ead538ec0b9a] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-vip-ha-450500-m02" [df1997ea-6b13-4caa-a041-2c63da793276] Running
	I0317 11:12:37.365246    8508 system_pods.go:89] "storage-provisioner" [b1a4725e-cedf-428a-a320-59d7374cba0d] Running
	I0317 11:12:37.365246    8508 system_pods.go:126] duration metric: took 200.668ms to wait for k8s-apps to be running ...
	I0317 11:12:37.365292    8508 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 11:12:37.375568    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 11:12:37.401869    8508 system_svc.go:56] duration metric: took 36.5231ms WaitForService to wait for kubelet
	I0317 11:12:37.401869    8508 kubeadm.go:582] duration metric: took 25.2943424s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 11:12:37.401935    8508 node_conditions.go:102] verifying NodePressure condition ...
	I0317 11:12:37.554482    8508 request.go:661] Waited for 152.4698ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes
	I0317 11:12:37.554482    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes
	I0317 11:12:37.554950    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:37.554950    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:37.554950    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:37.566997    8508 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0317 11:12:37.567691    8508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 11:12:37.567691    8508 node_conditions.go:123] node cpu capacity is 2
	I0317 11:12:37.567776    8508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 11:12:37.567776    8508 node_conditions.go:123] node cpu capacity is 2
	I0317 11:12:37.567776    8508 node_conditions.go:105] duration metric: took 165.8395ms to run NodePressure ...
	I0317 11:12:37.567776    8508 start.go:241] waiting for startup goroutines ...
	I0317 11:12:37.567992    8508 start.go:255] writing updated cluster config ...
	I0317 11:12:37.572518    8508 out.go:201] 
	I0317 11:12:37.591469    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:12:37.592426    8508 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json ...
	I0317 11:12:37.598440    8508 out.go:177] * Starting "ha-450500-m03" control-plane node in "ha-450500" cluster
	I0317 11:12:37.602446    8508 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 11:12:37.602446    8508 cache.go:56] Caching tarball of preloaded images
	I0317 11:12:37.602760    8508 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 11:12:37.602760    8508 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0317 11:12:37.603382    8508 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json ...
	I0317 11:12:37.610585    8508 start.go:360] acquireMachinesLock for ha-450500-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 11:12:37.611355    8508 start.go:364] duration metric: took 673.4µs to acquireMachinesLock for "ha-450500-m03"
	I0317 11:12:37.611423    8508 start.go:93] Provisioning new machine with config: &{Name:ha-450500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.21.189 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 11:12:37.611423    8508 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0317 11:12:37.615250    8508 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0317 11:12:37.615250    8508 start.go:159] libmachine.API.Create for "ha-450500" (driver="hyperv")
	I0317 11:12:37.615250    8508 client.go:168] LocalClient.Create starting
	I0317 11:12:37.616765    8508 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0317 11:12:37.617305    8508 main.go:141] libmachine: Decoding PEM data...
	I0317 11:12:37.617698    8508 main.go:141] libmachine: Parsing certificate...
	I0317 11:12:37.618070    8508 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0317 11:12:37.618476    8508 main.go:141] libmachine: Decoding PEM data...
	I0317 11:12:37.618476    8508 main.go:141] libmachine: Parsing certificate...
	I0317 11:12:37.619108    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0317 11:12:39.643873    8508 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0317 11:12:39.643873    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:12:39.644116    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0317 11:12:41.449478    8508 main.go:141] libmachine: [stdout =====>] : False
	
	I0317 11:12:41.449478    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:12:41.449590    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 11:12:42.979010    8508 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 11:12:42.979010    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:12:42.979157    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 11:12:46.759725    8508 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 11:12:46.759997    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:12:46.762290    8508 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0317 11:12:47.243667    8508 main.go:141] libmachine: Creating SSH key...
	I0317 11:12:47.590392    8508 main.go:141] libmachine: Creating VM...
	I0317 11:12:47.590392    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 11:12:50.575729    8508 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 11:12:50.575729    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:12:50.576567    8508 main.go:141] libmachine: Using switch "Default Switch"
	I0317 11:12:50.576648    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 11:12:52.406332    8508 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 11:12:52.406332    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:12:52.406985    8508 main.go:141] libmachine: Creating VHD
	I0317 11:12:52.406985    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0317 11:12:56.278046    8508 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 87EC2771-5F2A-4102-A38E-D9D489CF70CB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0317 11:12:56.278216    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:12:56.278216    8508 main.go:141] libmachine: Writing magic tar header
	I0317 11:12:56.278303    8508 main.go:141] libmachine: Writing SSH key tar header
	I0317 11:12:56.291816    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0317 11:12:59.532486    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:12:59.532819    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:12:59.532819    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\disk.vhd' -SizeBytes 20000MB
	I0317 11:13:02.156812    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:02.156812    8508 main.go:141] libmachine: [stderr =====>] : 
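The sequence above ("Creating SSH key" / "Writing magic tar header" / "Writing SSH key tar header", followed by `New-VHD -Fixed`, `Convert-VHD ... -VHDType Dynamic`, and `Resize-VHD`) is the driver's trick for injecting the SSH key into a fresh disk: the key is packed into a tiny tar archive written over the start of the fixed VHD before conversion, and the boot2docker guest recognises the tar magic on first boot and extracts it. A minimal Go sketch of building such an archive — the file names and key contents here are illustrative assumptions, not the driver's actual layout:

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
)

// buildMagicTar packs an SSH public key into an in-memory tar archive,
// mimicking the "magic tar header" the log writes into the raw VHD.
// Entry names and permissions are hypothetical placeholders.
func buildMagicTar(key []byte) ([]byte, error) {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	entries := []struct {
		name string
		body []byte
	}{
		{".ssh/", nil},                // directory entry
		{".ssh/authorized_keys", key}, // the injected public key
	}
	for _, e := range entries {
		hdr := &tar.Header{Name: e.name, Mode: 0o700, Size: int64(len(e.body))}
		if e.body == nil {
			hdr.Typeflag = tar.TypeDir
		}
		if err := tw.WriteHeader(hdr); err != nil {
			return nil, err
		}
		if _, err := tw.Write(e.body); err != nil {
			return nil, err
		}
	}
	if err := tw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	data, err := buildMagicTar([]byte("ssh-rsa AAAA... example-key"))
	// A tar header begins with the entry name at offset 0, so the
	// archive starts with ".ssh/".
	fmt.Println(err == nil, len(data) > 0, string(data[:5]))
}
```

Creating the disk as a small fixed VHD first matters because a fixed VHD's data region starts at offset 0 of the file, so the tar lands at the start of the virtual disk; converting to dynamic afterwards preserves the content while keeping the on-disk footprint small until `Resize-VHD` sets the final 20000MB capacity.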
	I0317 11:13:02.157608    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-450500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0317 11:13:05.849485    8508 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-450500-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0317 11:13:05.849485    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:05.849485    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-450500-m03 -DynamicMemoryEnabled $false
	I0317 11:13:08.201596    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:08.201596    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:08.201843    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-450500-m03 -Count 2
	I0317 11:13:10.450951    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:10.450951    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:10.451635    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-450500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\boot2docker.iso'
	I0317 11:13:13.095724    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:13.095724    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:13.096027    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-450500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\disk.vhd'
	I0317 11:13:15.777774    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:15.778509    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:15.778509    8508 main.go:141] libmachine: Starting VM...
	I0317 11:13:15.778732    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-450500-m03
	I0317 11:13:18.895602    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:18.895602    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:18.895602    8508 main.go:141] libmachine: Waiting for host to start...
	I0317 11:13:18.895602    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:13:21.222654    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:13:21.222654    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:21.222654    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:13:23.777334    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:23.777334    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:24.778323    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:13:27.105716    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:13:27.106610    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:27.106810    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:13:29.729891    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:29.729891    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:30.730598    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:13:32.974753    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:13:32.974753    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:32.975022    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:13:35.533118    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:35.533118    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:36.533356    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:13:38.844572    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:13:38.844572    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:38.845404    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:13:41.434725    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:41.435525    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:42.436521    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:13:44.685204    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:13:44.685204    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:44.685204    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:13:47.319208    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:13:47.319208    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:47.319409    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:13:49.469774    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:13:49.469774    8508 main.go:141] libmachine: [stderr =====>] : 
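The "Waiting for host to start..." phase above polls Hyper-V in a loop: each iteration checks the VM state, then queries `networkadapters[0].ipaddresses[0]`, sleeping between attempts until DHCP has assigned an address (empty stdout for the first four polls, then `172.25.19.102`). A minimal Go sketch of that retry loop, with a fake query function standing in for the PowerShell call:

```go
package main

import (
	"fmt"
	"time"
)

// waitForIP polls query until it returns a non-empty address or the
// attempt budget is exhausted, mirroring the state/IP polling in the log.
// The query func is a hypothetical stand-in for the Hyper-V PowerShell call.
func waitForIP(query func() string, interval time.Duration, attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip := query(); ip != "" {
			return ip, nil
		}
		time.Sleep(interval)
	}
	return "", fmt.Errorf("machine did not report an IP after %d attempts", attempts)
}

func main() {
	calls := 0
	fake := func() string {
		calls++
		if calls < 5 { // first few polls: no DHCP lease yet, as in the log
			return ""
		}
		return "172.25.19.102"
	}
	ip, err := waitForIP(fake, time.Millisecond, 10)
	fmt.Println(ip, err)
}
```

Note that in the log each poll costs roughly 2-3 seconds of PowerShell startup overhead, which is why five empty polls stretch the wait from 11:13:18 to 11:13:47.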
	I0317 11:13:49.469774    8508 machine.go:93] provisionDockerMachine start ...
	I0317 11:13:49.469774    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:13:51.692302    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:13:51.693264    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:51.693264    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:13:54.299861    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:13:54.299861    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:54.306223    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:13:54.306337    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.19.102 22 <nil> <nil>}
	I0317 11:13:54.306337    8508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 11:13:54.437799    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0317 11:13:54.437799    8508 buildroot.go:166] provisioning hostname "ha-450500-m03"
	I0317 11:13:54.438341    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:13:56.609069    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:13:56.609069    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:56.609069    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:13:59.186271    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:13:59.187118    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:59.192671    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:13:59.193442    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.19.102 22 <nil> <nil>}
	I0317 11:13:59.193442    8508 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450500-m03 && echo "ha-450500-m03" | sudo tee /etc/hostname
	I0317 11:13:59.349997    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450500-m03
	
	I0317 11:13:59.349997    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:01.559393    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:01.559393    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:01.559393    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:04.193668    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:04.194703    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:04.200593    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:14:04.201252    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.19.102 22 <nil> <nil>}
	I0317 11:14:04.201252    8508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450500-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450500-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450500-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 11:14:04.345564    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 11:14:04.345564    8508 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0317 11:14:04.345564    8508 buildroot.go:174] setting up certificates
	I0317 11:14:04.345564    8508 provision.go:84] configureAuth start
	I0317 11:14:04.346102    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:06.604863    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:06.605362    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:06.605362    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:09.195920    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:09.195920    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:09.195920    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:11.393420    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:11.393420    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:11.393420    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:13.971892    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:13.972826    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:13.972909    8508 provision.go:143] copyHostCerts
	I0317 11:14:13.973126    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0317 11:14:13.973464    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0317 11:14:13.973528    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0317 11:14:13.973633    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0317 11:14:13.974529    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0317 11:14:13.975057    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0317 11:14:13.975057    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0317 11:14:13.975256    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0317 11:14:13.976095    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0317 11:14:13.976095    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0317 11:14:13.976095    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0317 11:14:13.976785    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0317 11:14:13.977584    8508 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-450500-m03 san=[127.0.0.1 172.25.19.102 ha-450500-m03 localhost minikube]
	I0317 11:14:14.393909    8508 provision.go:177] copyRemoteCerts
	I0317 11:14:14.406073    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 11:14:14.406147    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:16.572409    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:16.572575    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:16.572686    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:19.184285    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:19.184285    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:19.185227    8508 sshutil.go:53] new ssh client: &{IP:172.25.19.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\id_rsa Username:docker}
	I0317 11:14:19.290840    8508 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8847299s)
	I0317 11:14:19.290840    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0317 11:14:19.290840    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 11:14:19.336651    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0317 11:14:19.337244    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0317 11:14:19.382465    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0317 11:14:19.382938    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 11:14:19.427140    8508 provision.go:87] duration metric: took 15.0814127s to configureAuth
	I0317 11:14:19.427219    8508 buildroot.go:189] setting minikube options for container-runtime
	I0317 11:14:19.427871    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:14:19.427871    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:21.602097    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:21.602097    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:21.602748    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:24.225066    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:24.225533    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:24.232188    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:14:24.232849    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.19.102 22 <nil> <nil>}
	I0317 11:14:24.232849    8508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0317 11:14:24.374274    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0317 11:14:24.374274    8508 buildroot.go:70] root file system type: tmpfs
	I0317 11:14:24.374274    8508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0317 11:14:24.374996    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:26.512583    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:26.512583    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:26.513244    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:29.112887    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:29.112887    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:29.119456    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:14:29.120304    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.19.102 22 <nil> <nil>}
	I0317 11:14:29.120593    8508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.16.34"
	Environment="NO_PROXY=172.25.16.34,172.25.21.189"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0317 11:14:29.288750    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.16.34
	Environment=NO_PROXY=172.25.16.34,172.25.21.189
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
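The unit file written above contains the systemd override idiom its own comments describe: an empty `ExecStart=` line clears the command inherited from the base configuration before the real one is set, since non-oneshot services may carry only one `ExecStart=`. A minimal stand-alone sketch of that shape (throwaway file, abbreviated dockerd command):

```shell
# Sketch of the ExecStart-clearing pattern from the generated docker.service:
# the first ExecStart= is empty (clears any inherited command), the second
# carries the real command. Paths and flags here are illustrative only.
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Service]
Type=notify
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
first=$(grep -m1 '^ExecStart=' "$unit")   # must be the empty one
count=$(grep -c '^ExecStart=' "$unit")    # exactly two lines expected
echo "first=$first count=$count"
rm -f "$unit"
```

Without the empty line, systemd would refuse the unit with "Service has more than one ExecStart= setting", exactly as the comment block in the log warns.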
	
	I0317 11:14:29.288750    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:31.483668    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:31.483668    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:31.483668    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:34.092330    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:34.092330    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:34.098474    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:14:34.098474    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.19.102 22 <nil> <nil>}
	I0317 11:14:34.099000    8508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0317 11:14:36.347713    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
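The SSH command above uses an install-if-changed idiom: `diff -u old new || { mv new old; restart; }` replaces the unit only when it is missing or differs, which is why the "can't stat" diff error here is benign. A hedged local reproduction with throwaway files:

```shell
# Install-if-changed sketch: diff exits non-zero when the target is missing
# or differs from the new file; only then is the new file promoted.
old=$(mktemp -u)   # a path that does not exist yet, like on the fresh VM
new=$(mktemp)
echo "v1" > "$new"
installed=no
diff -u "$old" "$new" 2>/dev/null || { cp "$new" "$old"; installed=yes; }
contents=$(cat "$old")
echo "installed=$installed contents=$contents"
rm -f "$old" "$new"
```

On an unchanged re-run, `diff` would exit 0 and the service would not be restarted, which keeps provisioning idempotent.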
	
	I0317 11:14:36.347713    8508 machine.go:96] duration metric: took 46.8775827s to provisionDockerMachine
	I0317 11:14:36.347713    8508 client.go:171] duration metric: took 1m58.7315615s to LocalClient.Create
	I0317 11:14:36.347713    8508 start.go:167] duration metric: took 1m58.7315615s to libmachine.API.Create "ha-450500"
	I0317 11:14:36.347713    8508 start.go:293] postStartSetup for "ha-450500-m03" (driver="hyperv")
	I0317 11:14:36.347713    8508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 11:14:36.359713    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 11:14:36.359713    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:38.594378    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:38.595363    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:38.595437    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:41.168190    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:41.168283    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:41.168448    8508 sshutil.go:53] new ssh client: &{IP:172.25.19.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\id_rsa Username:docker}
	I0317 11:14:41.274774    8508 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9150236s)
	I0317 11:14:41.287102    8508 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 11:14:41.295877    8508 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 11:14:41.295877    8508 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0317 11:14:41.296515    8508 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0317 11:14:41.297191    8508 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> 89402.pem in /etc/ssl/certs
	I0317 11:14:41.297191    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> /etc/ssl/certs/89402.pem
	I0317 11:14:41.308357    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 11:14:41.329650    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /etc/ssl/certs/89402.pem (1708 bytes)
	I0317 11:14:41.377920    8508 start.go:296] duration metric: took 5.0301692s for postStartSetup
	I0317 11:14:41.380919    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:43.562760    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:43.563755    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:43.563755    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:46.135072    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:46.135625    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:46.135625    8508 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json ...
	I0317 11:14:46.138914    8508 start.go:128] duration metric: took 2m8.5265152s to createHost
	I0317 11:14:46.139700    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:48.418326    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:48.418326    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:48.418326    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:50.993580    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:50.993580    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:50.999925    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:14:51.000480    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.19.102 22 <nil> <nil>}
	I0317 11:14:51.000536    8508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 11:14:51.133447    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742210091.155605387
	
	I0317 11:14:51.133447    8508 fix.go:216] guest clock: 1742210091.155605387
	I0317 11:14:51.133447    8508 fix.go:229] Guest: 2025-03-17 11:14:51.155605387 +0000 UTC Remote: 2025-03-17 11:14:46.1394743 +0000 UTC m=+569.588783001 (delta=5.016131087s)
	I0317 11:14:51.133447    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:53.370296    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:53.370296    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:53.370296    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:55.953166    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:55.953166    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:55.960437    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:14:55.961153    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.19.102 22 <nil> <nil>}
	I0317 11:14:55.961153    8508 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1742210091
	I0317 11:14:56.103828    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 17 11:14:51 UTC 2025
	
	I0317 11:14:56.103828    8508 fix.go:236] clock set: Mon Mar 17 11:14:51 UTC 2025
	 (err=<nil>)
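The clock-fix step above reads the guest clock with `date +%s.%N`, compares it with the host, and resets the guest via `sudo date -s @<epoch>` when it drifts. A sketch of the delta arithmetic using the values from this log (the host epoch below is an assumed value implied by the reported ~5s delta, not taken verbatim from the log):

```shell
# Guest/host clock delta sketch; both epochs in whole seconds.
guest=1742210091   # guest clock reported by "date +%s.%N" above (truncated)
host=1742210086    # assumed host epoch implied by the ~5.016s delta
delta=$((guest - host))
echo "delta=${delta}s"
# minikube then resets the guest with: sudo date -s @$guest
```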
	I0317 11:14:56.103828    8508 start.go:83] releasing machines lock for "ha-450500-m03", held for 2m18.4914218s
	I0317 11:14:56.104405    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:58.286278    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:58.286278    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:58.286378    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:15:00.905094    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:15:00.905172    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:00.908982    8508 out.go:177] * Found network options:
	I0317 11:15:00.913265    8508 out.go:177]   - NO_PROXY=172.25.16.34,172.25.21.189
	W0317 11:15:00.916722    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	W0317 11:15:00.916786    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	I0317 11:15:00.920940    8508 out.go:177]   - NO_PROXY=172.25.16.34,172.25.21.189
	W0317 11:15:00.924920    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	W0317 11:15:00.924920    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	W0317 11:15:00.925951    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	W0317 11:15:00.925951    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	I0317 11:15:00.927912    8508 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0317 11:15:00.928989    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:15:00.939909    8508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 11:15:00.939909    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:15:03.233672    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:15:03.234099    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:03.234099    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:15:03.250398    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:15:03.250398    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:03.250398    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:15:05.926492    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:15:05.926895    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:05.927304    8508 sshutil.go:53] new ssh client: &{IP:172.25.19.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\id_rsa Username:docker}
	I0317 11:15:05.952454    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:15:05.952454    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:05.952925    8508 sshutil.go:53] new ssh client: &{IP:172.25.19.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\id_rsa Username:docker}
	I0317 11:15:06.017777    8508 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0777044s)
	W0317 11:15:06.017777    8508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 11:15:06.035326    8508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
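The `find` invocation above disables competing CNI configurations by renaming any bridge or podman config with a `.mk_disabled` suffix while leaving everything else alone. The same expression, run against a throwaway directory with illustrative file names:

```shell
# CNI-disable sketch: rename bridge/podman configs, keep other configs intact.
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/10-flannel.conf"
find "$d" -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) \
  -and -not -name '*.mk_disabled' \) -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
remaining=$(ls "$d")
echo "$remaining"
rm -rf "$d"
```

The `-not -name '*.mk_disabled'` guard keeps the rename idempotent across re-runs.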
	I0317 11:15:06.038290    8508 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1092615s)
	W0317 11:15:06.038290    8508 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
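The status-127 failure above is a host/guest binary-name mismatch: `curl.exe` is the Windows binary name, but the command runs over SSH inside the Linux guest, where only `curl` exists. This is what later surfaces as the "Failing to connect to https://registry.k8s.io/" warning in the test's stderr. A hedged sketch of a fallback guard one might use:

```shell
# Prefer the Windows name if present, otherwise fall back to plain "curl";
# on a Linux guest curl.exe never resolves, so the fallback is taken.
cmd=curl.exe
command -v "$cmd" >/dev/null 2>&1 || cmd=curl
echo "using: $cmd"
```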
	I0317 11:15:06.067494    8508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 11:15:06.067494    8508 start.go:495] detecting cgroup driver to use...
	I0317 11:15:06.067667    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
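The command above writes the crictl configuration by piping `printf` through `tee`, pointing crictl at the containerd socket. The same write, reproduced against a temporary directory instead of `/etc`:

```shell
# crictl.yaml write sketch: printf piped to tee, as in the logged command.
d=$(mktemp -d)
printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | tee "$d/crictl.yaml" >/dev/null
line=$(cat "$d/crictl.yaml")
echo "$line"
rm -rf "$d"
```

A few lines later the log rewrites the same file to `unix:///var/run/cri-dockerd.sock` once the Docker runtime is selected.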
	I0317 11:15:06.116328    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 11:15:06.150225    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0317 11:15:06.160731    8508 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0317 11:15:06.160764    8508 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0317 11:15:06.176524    8508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 11:15:06.193279    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 11:15:06.226389    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:15:06.259499    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 11:15:06.294519    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:15:06.326057    8508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 11:15:06.359772    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 11:15:06.391326    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 11:15:06.423236    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
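The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place; the key one for the "cgroupfs" driver choice flips `SystemdCgroup` to `false` while preserving indentation via the captured leading spaces. The same substitution against a throwaway config fragment:

```shell
# SystemdCgroup rewrite sketch: identical sed expression to the log, applied
# to a minimal containerd config snippet.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
result=$(grep 'SystemdCgroup' "$cfg")
echo "$result"
rm -f "$cfg"
```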
	I0317 11:15:06.454021    8508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 11:15:06.472904    8508 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
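The status-255 error above is a probe, not a failure: `/proc/sys/net/bridge/bridge-nf-call-iptables` only exists once the `br_netfilter` module is loaded, so minikube treats a missing key as "module not yet loaded" and immediately runs `modprobe br_netfilter`. A sketch of that probe-then-load decision (read-only, no modprobe actually executed):

```shell
# br_netfilter probe sketch: presence of the sysctl key implies the module
# is loaded; absence means modprobe is needed. Result varies by host.
key=/proc/sys/net/bridge/bridge-nf-call-iptables
if [ -e "$key" ]; then
  status="present"
else
  status="missing (modprobe br_netfilter needed)"
fi
echo "bridge-nf-call-iptables: $status"
```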
	I0317 11:15:06.485086    8508 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 11:15:06.518907    8508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 11:15:06.546483    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:15:06.760165    8508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 11:15:06.793752    8508 start.go:495] detecting cgroup driver to use...
	I0317 11:15:06.804956    8508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0317 11:15:06.842822    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 11:15:06.877224    8508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 11:15:06.920453    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 11:15:06.961695    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:15:07.001298    8508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 11:15:07.070070    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:15:07.094206    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:15:07.140566    8508 ssh_runner.go:195] Run: which cri-dockerd
	I0317 11:15:07.159026    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0317 11:15:07.178431    8508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0317 11:15:07.220820    8508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0317 11:15:07.411288    8508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0317 11:15:07.603763    8508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0317 11:15:07.603763    8508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0317 11:15:07.646468    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:15:07.849408    8508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 11:15:10.484639    8508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6351274s)
	I0317 11:15:10.497265    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0317 11:15:10.533379    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 11:15:10.569879    8508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0317 11:15:10.786050    8508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0317 11:15:10.991152    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:15:11.211713    8508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0317 11:15:11.264048    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 11:15:11.299904    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:15:11.502945    8508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0317 11:15:11.621306    8508 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0317 11:15:11.634008    8508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0317 11:15:11.642887    8508 start.go:563] Will wait 60s for crictl version
	I0317 11:15:11.652876    8508 ssh_runner.go:195] Run: which crictl
	I0317 11:15:11.670718    8508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 11:15:11.729357    8508 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0317 11:15:11.737916    8508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 11:15:11.787651    8508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 11:15:11.829755    8508 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0317 11:15:11.832590    8508 out.go:177]   - env NO_PROXY=172.25.16.34
	I0317 11:15:11.835962    8508 out.go:177]   - env NO_PROXY=172.25.16.34,172.25.21.189
	I0317 11:15:11.838027    8508 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0317 11:15:11.842073    8508 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0317 11:15:11.842073    8508 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0317 11:15:11.842073    8508 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0317 11:15:11.842073    8508 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4b:84:d5 Flags:up|broadcast|multicast|running}
	I0317 11:15:11.844144    8508 ip.go:214] interface addr: fe80::f0c7:c31c:6237:ef35/64
	I0317 11:15:11.844144    8508 ip.go:214] interface addr: 172.25.16.1/20
	I0317 11:15:11.856174    8508 ssh_runner.go:195] Run: grep 172.25.16.1	host.minikube.internal$ /etc/hosts
	I0317 11:15:11.862740    8508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
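The `/etc/hosts` update above is another idempotent idiom: strip any prior `host.minikube.internal` entry with `grep -v`, append the fresh mapping, write to a temp file, then copy it back. Reproduced against a scratch file:

```shell
# Idempotent hosts-entry sketch: re-running never duplicates the line.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.25.16.1\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.25.16.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
hits=$(grep -c 'host.minikube.internal' "$hosts")
echo "entries=$hits"
rm -f "$hosts" "$hosts.new"
```

Writing to a temp file and then `cp`-ing back (rather than redirecting into the file being read) avoids truncating `/etc/hosts` mid-read.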
	I0317 11:15:11.885158    8508 mustload.go:65] Loading cluster: ha-450500
	I0317 11:15:11.886072    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:15:11.886961    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:15:14.039199    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:15:14.039199    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:14.039379    8508 host.go:66] Checking if "ha-450500" exists ...
	I0317 11:15:14.040412    8508 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500 for IP: 172.25.19.102
	I0317 11:15:14.040491    8508 certs.go:194] generating shared ca certs ...
	I0317 11:15:14.040491    8508 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:15:14.041140    8508 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0317 11:15:14.041502    8508 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0317 11:15:14.041502    8508 certs.go:256] generating profile certs ...
	I0317 11:15:14.042105    8508 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\client.key
	I0317 11:15:14.042105    8508 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.99204d39
	I0317 11:15:14.042105    8508 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.99204d39 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.16.34 172.25.21.189 172.25.19.102 172.25.31.254]
	I0317 11:15:14.240081    8508 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.99204d39 ...
	I0317 11:15:14.240081    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.99204d39: {Name:mk255eb8c6c9ec06403e380d9b5b4bdaba94ffb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:15:14.242749    8508 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.99204d39 ...
	I0317 11:15:14.242749    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.99204d39: {Name:mk8f0af1d56c3096cbfdc7ace52600645aafb8e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:15:14.243735    8508 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.99204d39 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt
	I0317 11:15:14.258744    8508 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.99204d39 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key
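The cert steps above generate an apiserver certificate whose SANs cover the service IPs and every control-plane node IP, then promote the suffixed `.99204d39` files to their final names. A throwaway `openssl` sketch of certificate generation (self-signed and without SANs for brevity; the real minikube certs are CA-signed and carry the IP list shown in the log):

```shell
# Stand-in for the apiserver cert generation: a disposable self-signed pair.
d=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikube" -days 1 \
  -keyout "$d/apiserver.key" -out "$d/apiserver.crt" 2>/dev/null
subject=$(openssl x509 -noout -subject -in "$d/apiserver.crt")
echo "$subject"
rm -rf "$d"
```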
	I0317 11:15:14.264208    8508 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key
	I0317 11:15:14.264208    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0317 11:15:14.264208    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0317 11:15:14.265008    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0317 11:15:14.265223    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0317 11:15:14.265434    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0317 11:15:14.265434    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0317 11:15:14.265972    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0317 11:15:14.266178    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0317 11:15:14.266229    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem (1338 bytes)
	W0317 11:15:14.267031    8508 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940_empty.pem, impossibly tiny 0 bytes
	I0317 11:15:14.267054    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0317 11:15:14.267054    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0317 11:15:14.267596    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0317 11:15:14.268009    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0317 11:15:14.268009    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem (1708 bytes)
	I0317 11:15:14.268697    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:15:14.268839    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem -> /usr/share/ca-certificates/8940.pem
	I0317 11:15:14.268839    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> /usr/share/ca-certificates/89402.pem
	I0317 11:15:14.268839    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:15:16.427874    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:15:16.427874    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:16.427874    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:15:19.024483    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:15:19.024585    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:19.025160    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:15:19.129159    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0317 11:15:19.138532    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0317 11:15:19.176902    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0317 11:15:19.183881    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0317 11:15:19.213843    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0317 11:15:19.220438    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0317 11:15:19.249031    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0317 11:15:19.255289    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0317 11:15:19.284043    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0317 11:15:19.290482    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0317 11:15:19.322049    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0317 11:15:19.329399    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0317 11:15:19.348582    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 11:15:19.398357    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 11:15:19.442075    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 11:15:19.496229    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0317 11:15:19.541893    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0317 11:15:19.587925    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 11:15:19.633371    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 11:15:19.685478    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 11:15:19.734152    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 11:15:19.776998    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem --> /usr/share/ca-certificates/8940.pem (1338 bytes)
	I0317 11:15:19.821388    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /usr/share/ca-certificates/89402.pem (1708 bytes)
	I0317 11:15:19.866321    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0317 11:15:19.897359    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0317 11:15:19.929307    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0317 11:15:19.962392    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0317 11:15:19.992834    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0317 11:15:20.023152    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0317 11:15:20.055363    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0317 11:15:20.096735    8508 ssh_runner.go:195] Run: openssl version
	I0317 11:15:20.115931    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 11:15:20.146722    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:15:20.154700    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:15:20.166295    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:15:20.185407    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 11:15:20.217335    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8940.pem && ln -fs /usr/share/ca-certificates/8940.pem /etc/ssl/certs/8940.pem"
	I0317 11:15:20.246143    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8940.pem
	I0317 11:15:20.252963    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:46 /usr/share/ca-certificates/8940.pem
	I0317 11:15:20.263775    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8940.pem
	I0317 11:15:20.284998    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8940.pem /etc/ssl/certs/51391683.0"
	I0317 11:15:20.318558    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/89402.pem && ln -fs /usr/share/ca-certificates/89402.pem /etc/ssl/certs/89402.pem"
	I0317 11:15:20.351183    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89402.pem
	I0317 11:15:20.359028    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:46 /usr/share/ca-certificates/89402.pem
	I0317 11:15:20.372308    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89402.pem
	I0317 11:15:20.394494    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/89402.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 11:15:20.427544    8508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 11:15:20.434494    8508 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 11:15:20.434494    8508 kubeadm.go:934] updating node {m03 172.25.19.102 8443 v1.32.2 docker true true} ...
	I0317 11:15:20.435029    8508 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450500-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.19.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 11:15:20.435079    8508 kube-vip.go:115] generating kube-vip config ...
	I0317 11:15:20.446700    8508 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0317 11:15:20.477823    8508 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0317 11:15:20.477823    8508 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.31.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0317 11:15:20.490226    8508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 11:15:20.508399    8508 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0317 11:15:20.520911    8508 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0317 11:15:20.538872    8508 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256
	I0317 11:15:20.538872    8508 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0317 11:15:20.538872    8508 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256
	I0317 11:15:20.539685    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0317 11:15:20.540011    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm -> /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0317 11:15:20.554737    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 11:15:20.554737    8508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0317 11:15:20.554737    8508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0317 11:15:20.579481    8508 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0317 11:15:20.579540    8508 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0317 11:15:20.579665    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0317 11:15:20.579792    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0317 11:15:20.579953    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0317 11:15:20.591987    8508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0317 11:15:20.634302    8508 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0317 11:15:20.634357    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
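The `binary.go:74` lines above download each k8s binary with a `checksum=file:<url>.sha256` guard, i.e. the fetched bytes are verified against a published SHA-256 digest before being cached. A minimal sketch of that verification step (names and the sample bytes are illustrative, not minikube's actual code):

```python
# Sketch of checksum-guarded download verification: compare a blob's
# SHA-256 digest against the expected hex digest from the .sha256 file.
import hashlib

def verify_sha256(data: bytes, expected_hex: str) -> bool:
    """Return True only if data hashes to the expected digest."""
    return hashlib.sha256(data).hexdigest() == expected_hex

blob = b"kubelet-binary-bytes"          # stand-in for the downloaded binary
digest = hashlib.sha256(blob).hexdigest()
print(verify_sha256(blob, digest))      # matching digest passes
print(verify_sha256(blob, "0" * 64))    # tampered/wrong digest fails
```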
	I0317 11:15:22.025520    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0317 11:15:22.051106    8508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0317 11:15:22.086097    8508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 11:15:22.121756    8508 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0317 11:15:22.170554    8508 ssh_runner.go:195] Run: grep 172.25.31.254	control-plane.minikube.internal$ /etc/hosts
	I0317 11:15:22.177167    8508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.31.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
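The `/etc/hosts` command above is an idempotent replace-then-append: strip any stale `control-plane.minikube.internal` line, then add one pointing at the current VIP, so reruns never accumulate duplicates. A sketch of the same pattern over an in-memory list (the real command edits `/etc/hosts` over SSH with sudo; these names are illustrative):

```python
# Idempotent hosts-entry update: drop the old control-plane line (if
# any), then append the current VIP mapping.
def update_hosts(lines, vip="172.25.31.254",
                 name="control-plane.minikube.internal"):
    kept = [l for l in lines if not l.endswith("\t" + name)]
    kept.append(f"{vip}\t{name}")
    return kept

hosts = ["127.0.0.1\tlocalhost",
         "172.25.0.1\tcontrol-plane.minikube.internal"]  # stale entry
print(update_hosts(hosts))
```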
	I0317 11:15:22.212326    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:15:22.430849    8508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:15:22.462141    8508 host.go:66] Checking if "ha-450500" exists ...
	I0317 11:15:22.541128    8508 start.go:317] joinCluster: &{Name:ha-450500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:def
ault APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.21.189 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.25.19.102 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:15:22.541441    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0317 11:15:22.541526    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:15:24.742855    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:15:24.742855    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:24.743509    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:15:27.331280    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:15:27.331280    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:27.331280    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:15:27.523577    8508 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0": (4.982098s)
	I0317 11:15:27.523577    8508 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.25.19.102 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 11:15:27.523577    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6yyl46.a7oj7eb2wz8mbr99 --discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-450500-m03 --control-plane --apiserver-advertise-address=172.25.19.102 --apiserver-bind-port=8443"
	I0317 11:16:09.800170    8508 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6yyl46.a7oj7eb2wz8mbr99 --discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-450500-m03 --control-plane --apiserver-advertise-address=172.25.19.102 --apiserver-bind-port=8443": (42.2762667s)
	I0317 11:16:09.800170    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0317 11:16:10.848630    8508 ssh_runner.go:235] Completed: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet": (1.0484519s)
	I0317 11:16:10.861795    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450500-m03 minikube.k8s.io/updated_at=2025_03_17T11_16_10_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=ha-450500 minikube.k8s.io/primary=false
	I0317 11:16:11.091287    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-450500-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0317 11:16:11.298616    8508 start.go:319] duration metric: took 48.7571112s to joinCluster
	I0317 11:16:11.298856    8508 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.25.19.102 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 11:16:11.300313    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:16:11.302139    8508 out.go:177] * Verifying Kubernetes components...
	I0317 11:16:11.322778    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:16:11.830977    8508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:16:11.882935    8508 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 11:16:11.884247    8508 kapi.go:59] client config for ha-450500: &rest.Config{Host:"https://172.25.31.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-450500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-450500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2e292e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0317 11:16:11.884247    8508 kubeadm.go:483] Overriding stale ClientConfig host https://172.25.31.254:8443 with https://172.25.16.34:8443
	I0317 11:16:11.885758    8508 node_ready.go:35] waiting up to 6m0s for node "ha-450500-m03" to be "Ready" ...
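The `node_ready.go` wait that follows is a bounded poll loop: GET the node object roughly every 500 ms until its Ready condition turns True or the 6m deadline expires. A sketch of that loop with a fake fetch standing in for the API round trip (timings shortened; not minikube's actual code):

```python
# Bounded readiness poll: call fetch() until it reports True or the
# deadline passes, sleeping between attempts.
import time

def wait_ready(fetch, timeout=5.0, interval=0.01):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if fetch():
            return True
        time.sleep(interval)
    return False

states = iter([False, False, True])     # node becomes Ready on 3rd poll
print(wait_ready(lambda: next(states)))
```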
	I0317 11:16:11.886004    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:11.886028    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:11.886028    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:11.886028    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:11.903592    8508 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0317 11:16:12.387334    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:12.387334    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:12.387334    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:12.387334    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:12.393585    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:12.887531    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:12.887531    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:12.887531    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:12.887531    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:12.901476    8508 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0317 11:16:13.386852    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:13.386852    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:13.386852    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:13.386852    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:13.391730    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:13.886586    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:13.886586    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:13.886586    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:13.886586    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:13.893048    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:13.893048    8508 node_ready.go:53] node "ha-450500-m03" has status "Ready":"False"
	I0317 11:16:14.386817    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:14.386817    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:14.386817    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:14.386817    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:14.468858    8508 round_trippers.go:581] Response Status: 200 OK in 82 milliseconds
	I0317 11:16:14.886405    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:14.886405    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:14.886405    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:14.886405    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:14.891748    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:15.385977    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:15.386552    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:15.386552    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:15.386552    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:15.391295    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:15.886719    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:15.886719    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:15.886719    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:15.886719    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:15.892174    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:16.387104    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:16.387203    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:16.387203    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:16.387203    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:16.475472    8508 round_trippers.go:581] Response Status: 200 OK in 88 milliseconds
	I0317 11:16:16.475472    8508 node_ready.go:53] node "ha-450500-m03" has status "Ready":"False"
	I0317 11:16:16.886123    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:16.886123    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:16.886123    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:16.886123    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:16.891788    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:17.386941    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:17.386941    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:17.386941    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:17.386941    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:17.394088    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:17.886451    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:17.886451    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:17.886451    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:17.886451    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:17.893328    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:18.385878    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:18.385878    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:18.385878    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:18.385878    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:18.395417    8508 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0317 11:16:18.886536    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:18.886536    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:18.886536    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:18.886536    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:18.893985    8508 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 11:16:18.894968    8508 node_ready.go:53] node "ha-450500-m03" has status "Ready":"False"
	I0317 11:16:19.387318    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:19.387436    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:19.387436    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:19.387436    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:19.391529    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:19.886246    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:19.886246    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:19.886246    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:19.886246    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:19.892563    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:20.386019    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:20.386019    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:20.386019    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:20.386019    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:20.390740    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:20.886317    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:20.886317    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:20.886317    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:20.886317    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:20.891455    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:21.387211    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:21.387321    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:21.387321    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:21.387321    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:21.392614    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:21.393026    8508 node_ready.go:53] node "ha-450500-m03" has status "Ready":"False"
	I0317 11:16:21.886219    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:21.886219    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:21.886219    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:21.886219    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:21.895291    8508 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0317 11:16:22.386927    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:22.386927    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:22.386927    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:22.386927    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:22.392583    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:22.887166    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:22.887219    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:22.887219    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:22.887219    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:22.893309    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:23.387394    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:23.387394    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:23.387394    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:23.387394    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:23.399696    8508 round_trippers.go:581] Response Status: 200 OK in 12 milliseconds
	I0317 11:16:23.400321    8508 node_ready.go:53] node "ha-450500-m03" has status "Ready":"False"
	I0317 11:16:23.885990    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:23.885990    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:23.885990    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:23.885990    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:23.891923    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:24.386274    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:24.386274    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:24.386274    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:24.386274    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:24.392867    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:24.886675    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:24.886675    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:24.886675    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:24.886675    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:24.891639    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:25.386476    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:25.386476    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:25.386476    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:25.386476    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:25.391607    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:25.886873    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:25.886873    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:25.887465    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:25.887465    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:25.893666    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:25.893890    8508 node_ready.go:53] node "ha-450500-m03" has status "Ready":"False"
	I0317 11:16:26.386514    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:26.386514    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:26.386514    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:26.386514    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:26.391915    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:26.886674    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:26.886674    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:26.886674    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:26.886674    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:26.892406    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:27.386705    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:27.386705    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:27.386705    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:27.386705    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:27.391834    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:27.886317    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:27.886317    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:27.886317    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:27.886317    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:27.892627    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:28.386600    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:28.386600    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:28.386600    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:28.386600    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:28.392830    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:28.393020    8508 node_ready.go:53] node "ha-450500-m03" has status "Ready":"False"
	I0317 11:16:28.887122    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:28.887122    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:28.887122    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:28.887122    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:28.900264    8508 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0317 11:16:29.386651    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:29.386651    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:29.386651    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:29.386651    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:29.393241    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:29.886583    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:29.886583    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:29.886583    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:29.886583    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:29.902603    8508 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0317 11:16:30.386446    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:30.386498    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:30.386498    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:30.386585    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:30.403914    8508 round_trippers.go:581] Response Status: 200 OK in 17 milliseconds
	I0317 11:16:30.404142    8508 node_ready.go:53] node "ha-450500-m03" has status "Ready":"False"
	I0317 11:16:30.886312    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:30.886312    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:30.886312    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:30.886312    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:30.892530    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:31.386592    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:31.386592    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.386592    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.386592    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.392309    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:31.887272    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:31.887272    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.887418    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.887418    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.900494    8508 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0317 11:16:31.900904    8508 node_ready.go:49] node "ha-450500-m03" has status "Ready":"True"
	I0317 11:16:31.900904    8508 node_ready.go:38] duration metric: took 20.0149314s for node "ha-450500-m03" to be "Ready" ...
	I0317 11:16:31.900904    8508 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:16:31.901091    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:16:31.901160    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.901160    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.901160    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.909462    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:16:31.913224    8508 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-qd2nj" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:31.913290    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-qd2nj
	I0317 11:16:31.913290    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.913419    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.913419    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.925538    8508 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0317 11:16:31.926010    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:31.926072    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.926141    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.926304    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.932023    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:31.932023    8508 pod_ready.go:93] pod "coredns-668d6bf9bc-qd2nj" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:31.932023    8508 pod_ready.go:82] duration metric: took 18.7325ms for pod "coredns-668d6bf9bc-qd2nj" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:31.932023    8508 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-rhhkv" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:31.932023    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-rhhkv
	I0317 11:16:31.932023    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.932023    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.932023    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.936805    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:31.936805    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:31.936805    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.936805    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.936805    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.941339    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:31.941628    8508 pod_ready.go:93] pod "coredns-668d6bf9bc-rhhkv" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:31.941687    8508 pod_ready.go:82] duration metric: took 9.6644ms for pod "coredns-668d6bf9bc-rhhkv" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:31.941687    8508 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:31.941839    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450500
	I0317 11:16:31.941855    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.941855    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.941855    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.945778    8508 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 11:16:31.945849    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:31.945849    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.945849    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.945849    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.948565    8508 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 11:16:31.949812    8508 pod_ready.go:93] pod "etcd-ha-450500" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:31.949881    8508 pod_ready.go:82] duration metric: took 8.194ms for pod "etcd-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:31.949881    8508 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:31.949990    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450500-m02
	I0317 11:16:31.950042    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.950042    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.950082    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.955371    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:31.955965    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:31.956013    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.956013    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.956044    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.959317    8508 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 11:16:31.959317    8508 pod_ready.go:93] pod "etcd-ha-450500-m02" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:31.959317    8508 pod_ready.go:82] duration metric: took 9.4359ms for pod "etcd-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:31.959317    8508 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450500-m03" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:32.088255    8508 request.go:661] Waited for 128.9363ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450500-m03
	I0317 11:16:32.088255    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450500-m03
	I0317 11:16:32.088255    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:32.088255    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:32.088255    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:32.095739    8508 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 11:16:32.287665    8508 request.go:661] Waited for 191.3367ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:32.287665    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:32.287665    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:32.287665    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:32.287665    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:32.299344    8508 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0317 11:16:32.299344    8508 pod_ready.go:93] pod "etcd-ha-450500-m03" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:32.299344    8508 pod_ready.go:82] duration metric: took 340.024ms for pod "etcd-ha-450500-m03" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:32.299344    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:32.487244    8508 request.go:661] Waited for 186.6554ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500
	I0317 11:16:32.487244    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500
	I0317 11:16:32.487244    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:32.487244    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:32.487244    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:32.495674    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:16:32.688006    8508 request.go:661] Waited for 192.3307ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:32.688006    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:32.688006    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:32.688006    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:32.688006    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:32.695816    8508 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 11:16:32.696361    8508 pod_ready.go:93] pod "kube-apiserver-ha-450500" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:32.696361    8508 pod_ready.go:82] duration metric: took 397.0145ms for pod "kube-apiserver-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:32.696361    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:32.888084    8508 request.go:661] Waited for 191.505ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500-m02
	I0317 11:16:32.888084    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500-m02
	I0317 11:16:32.888084    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:32.888084    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:32.888084    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:32.894542    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:33.087450    8508 request.go:661] Waited for 192.3175ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:33.087450    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:33.087818    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:33.087818    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:33.087818    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:33.093980    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:33.094323    8508 pod_ready.go:93] pod "kube-apiserver-ha-450500-m02" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:33.094323    8508 pod_ready.go:82] duration metric: took 397.8266ms for pod "kube-apiserver-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:33.094442    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450500-m03" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:33.287377    8508 request.go:661] Waited for 192.8534ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500-m03
	I0317 11:16:33.287377    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500-m03
	I0317 11:16:33.287377    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:33.287377    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:33.287377    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:33.293425    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:33.488085    8508 request.go:661] Waited for 194.0931ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:33.488085    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:33.488085    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:33.488085    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:33.488085    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:33.493007    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:33.493929    8508 pod_ready.go:93] pod "kube-apiserver-ha-450500-m03" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:33.494052    8508 pod_ready.go:82] duration metric: took 399.484ms for pod "kube-apiserver-ha-450500-m03" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:33.494052    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:33.687763    8508 request.go:661] Waited for 193.7099ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500
	I0317 11:16:33.687763    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500
	I0317 11:16:33.687763    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:33.687763    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:33.687763    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:33.693730    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:33.887189    8508 request.go:661] Waited for 192.8194ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:33.887688    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:33.887755    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:33.887755    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:33.887755    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:33.893329    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:33.893754    8508 pod_ready.go:93] pod "kube-controller-manager-ha-450500" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:33.893837    8508 pod_ready.go:82] duration metric: took 399.7819ms for pod "kube-controller-manager-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:33.893837    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:34.087363    8508 request.go:661] Waited for 193.436ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500-m02
	I0317 11:16:34.087363    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500-m02
	I0317 11:16:34.087363    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:34.087363    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:34.087363    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:34.095462    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:16:34.287197    8508 request.go:661] Waited for 191.1807ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:34.287197    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:34.287197    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:34.287197    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:34.287197    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:34.293003    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:34.293762    8508 pod_ready.go:93] pod "kube-controller-manager-ha-450500-m02" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:34.293843    8508 pod_ready.go:82] duration metric: took 400.0032ms for pod "kube-controller-manager-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:34.293843    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450500-m03" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:34.487178    8508 request.go:661] Waited for 193.2463ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500-m03
	I0317 11:16:34.487178    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500-m03
	I0317 11:16:34.487178    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:34.487178    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:34.487178    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:34.493003    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:34.687719    8508 request.go:661] Waited for 194.1884ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:34.688166    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:34.688230    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:34.688230    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:34.688230    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:34.694427    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:34.694980    8508 pod_ready.go:93] pod "kube-controller-manager-ha-450500-m03" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:34.694980    8508 pod_ready.go:82] duration metric: took 401.1338ms for pod "kube-controller-manager-ha-450500-m03" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:34.694980    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fthkw" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:34.887883    8508 request.go:661] Waited for 192.7972ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fthkw
	I0317 11:16:34.887883    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fthkw
	I0317 11:16:34.887883    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:34.887883    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:34.887883    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:34.893669    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:35.088086    8508 request.go:661] Waited for 193.8774ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:35.088086    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:35.088086    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:35.088086    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:35.088086    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:35.095339    8508 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 11:16:35.095844    8508 pod_ready.go:93] pod "kube-proxy-fthkw" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:35.095844    8508 pod_ready.go:82] duration metric: took 400.7573ms for pod "kube-proxy-fthkw" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:35.095844    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jzvxr" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:35.287641    8508 request.go:661] Waited for 191.6027ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzvxr
	I0317 11:16:35.287641    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzvxr
	I0317 11:16:35.287641    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:35.288146    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:35.288146    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:35.293351    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:35.487876    8508 request.go:661] Waited for 194.0706ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:35.487876    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:35.488460    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:35.488460    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:35.488460    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:35.492905    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:35.494059    8508 pod_ready.go:93] pod "kube-proxy-jzvxr" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:35.494059    8508 pod_ready.go:82] duration metric: took 398.2116ms for pod "kube-proxy-jzvxr" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:35.494059    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ktktm" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:35.687715    8508 request.go:661] Waited for 193.6539ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktktm
	I0317 11:16:35.687715    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktktm
	I0317 11:16:35.687715    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:35.687715    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:35.687715    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:35.694200    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:35.887633    8508 request.go:661] Waited for 192.7209ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:35.887633    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:35.888210    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:35.888210    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:35.888210    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:35.892941    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:35.895085    8508 pod_ready.go:93] pod "kube-proxy-ktktm" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:35.895085    8508 pod_ready.go:82] duration metric: took 401.0231ms for pod "kube-proxy-ktktm" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:35.895144    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:36.088002    8508 request.go:661] Waited for 192.764ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500
	I0317 11:16:36.088002    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500
	I0317 11:16:36.088002    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:36.088002    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:36.088002    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:36.094659    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:36.287520    8508 request.go:661] Waited for 192.1951ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:36.288335    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:36.288335    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:36.288335    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:36.288335    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:36.294581    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:36.294581    8508 pod_ready.go:93] pod "kube-scheduler-ha-450500" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:36.294581    8508 pod_ready.go:82] duration metric: took 399.4341ms for pod "kube-scheduler-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:36.294581    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:36.488440    8508 request.go:661] Waited for 193.8577ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500-m02
	I0317 11:16:36.488440    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500-m02
	I0317 11:16:36.488440    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:36.488440    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:36.488440    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:36.495062    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:36.687645    8508 request.go:661] Waited for 191.3673ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:36.687645    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:36.687645    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:36.687645    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:36.687645    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:36.694499    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:36.694825    8508 pod_ready.go:93] pod "kube-scheduler-ha-450500-m02" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:36.694950    8508 pod_ready.go:82] duration metric: took 400.241ms for pod "kube-scheduler-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:36.694950    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450500-m03" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:36.888199    8508 request.go:661] Waited for 193.247ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500-m03
	I0317 11:16:36.888199    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500-m03
	I0317 11:16:36.888199    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:36.888199    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:36.888199    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:36.895067    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:37.087623    8508 request.go:661] Waited for 192.0996ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:37.087623    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:37.088122    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:37.088163    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:37.088163    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:37.096672    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:16:37.097577    8508 pod_ready.go:93] pod "kube-scheduler-ha-450500-m03" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:37.097695    8508 pod_ready.go:82] duration metric: took 402.7418ms for pod "kube-scheduler-ha-450500-m03" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:37.097695    8508 pod_ready.go:39] duration metric: took 5.1966538s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:16:37.097829    8508 api_server.go:52] waiting for apiserver process to appear ...
	I0317 11:16:37.109517    8508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 11:16:37.139738    8508 api_server.go:72] duration metric: took 25.8406808s to wait for apiserver process to appear ...
	I0317 11:16:37.139738    8508 api_server.go:88] waiting for apiserver healthz status ...
	I0317 11:16:37.139738    8508 api_server.go:253] Checking apiserver healthz at https://172.25.16.34:8443/healthz ...
	I0317 11:16:37.147832    8508 api_server.go:279] https://172.25.16.34:8443/healthz returned 200:
	ok
	I0317 11:16:37.147991    8508 round_trippers.go:470] GET https://172.25.16.34:8443/version
	I0317 11:16:37.148015    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:37.148015    8508 round_trippers.go:480]     Accept: application/json, */*
	I0317 11:16:37.148015    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:37.148675    8508 round_trippers.go:581] Response Status: 200 OK in 0 milliseconds
	I0317 11:16:37.149742    8508 api_server.go:141] control plane version: v1.32.2
	I0317 11:16:37.149933    8508 api_server.go:131] duration metric: took 10.1947ms to wait for apiserver health ...
	I0317 11:16:37.149933    8508 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 11:16:37.288150    8508 request.go:661] Waited for 138.0872ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:16:37.288150    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:16:37.288150    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:37.288150    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:37.288150    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:37.294610    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:37.298851    8508 system_pods.go:59] 24 kube-system pods found
	I0317 11:16:37.298851    8508 system_pods.go:61] "coredns-668d6bf9bc-qd2nj" [1f982191-c45a-4681-907d-a0d9220b1f77] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "coredns-668d6bf9bc-rhhkv" [0dc113a4-430f-4c5b-bc05-05d8cc014ed7] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "etcd-ha-450500" [7a735c5c-89ec-488d-95c8-f7fa1160fa3c] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "etcd-ha-450500-m02" [78234926-bd4c-41f7-9f48-43e2bcd543a4] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "etcd-ha-450500-m03" [c18e3ae6-30ed-44d8-8c4a-5dad20e962f9] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kindnet-94r58" [4b18e7c6-4105-4037-8742-d58ad9eda200] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kindnet-ch8f7" [6247c683-e723-4b72-b373-89cb4f1b576d] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kindnet-prwhr" [0f7a825d-bd7c-4428-8685-cfa8926ef827] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-apiserver-ha-450500" [232d0746-fa49-4d17-b36e-557164865a8f] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-apiserver-ha-450500-m02" [a445c598-90ff-4e54-a96f-3d206a54a108] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-apiserver-ha-450500-m03" [96fcebba-cad8-4023-b7f7-08ac83263448] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-controller-manager-ha-450500" [afff30e2-5638-4b0f-bbe0-ec65ff25eef4] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-controller-manager-ha-450500-m02" [e931480a-7ff2-4264-bef5-ff129c603e77] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-controller-manager-ha-450500-m03" [cf875e54-a5d1-48bb-906b-f0be64a0d579] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-proxy-fthkw" [8a9b1cd2-2eb8-49ac-8cc5-df138d6d0670] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-proxy-jzvxr" [eeae069e-8b5a-449e-9fe2-ecafcd9733eb] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-proxy-ktktm" [2900bbaf-f433-41ca-a7f2-8491834c1c3d] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-scheduler-ha-450500" [66d11ee5-babc-401c-bf3f-8bd94eb09e06] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-scheduler-ha-450500-m02" [053f7a43-bff9-4dc7-8d8d-126f265dfbed] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-scheduler-ha-450500-m03" [ee967b5b-f00f-4680-beec-6d938e449577] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-vip-ha-450500" [55e8c247-e66e-47b2-b766-ead538ec0b9a] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-vip-ha-450500-m02" [df1997ea-6b13-4caa-a041-2c63da793276] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-vip-ha-450500-m03" [4153bf28-7e36-4ff9-9ddd-334201353a29] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "storage-provisioner" [b1a4725e-cedf-428a-a320-59d7374cba0d] Running
	I0317 11:16:37.298851    8508 system_pods.go:74] duration metric: took 148.9171ms to wait for pod list to return data ...
	I0317 11:16:37.298851    8508 default_sa.go:34] waiting for default service account to be created ...
	I0317 11:16:37.487397    8508 request.go:661] Waited for 187.5606ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/default/serviceaccounts
	I0317 11:16:37.487397    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/default/serviceaccounts
	I0317 11:16:37.487397    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:37.487397    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:37.487397    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:37.492424    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:37.493202    8508 default_sa.go:45] found service account: "default"
	I0317 11:16:37.493202    8508 default_sa.go:55] duration metric: took 194.3496ms for default service account to be created ...
	I0317 11:16:37.493411    8508 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 11:16:37.688075    8508 request.go:661] Waited for 194.6258ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:16:37.688732    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:16:37.688732    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:37.688732    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:37.688934    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:37.695994    8508 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 11:16:37.698469    8508 system_pods.go:86] 24 kube-system pods found
	I0317 11:16:37.698568    8508 system_pods.go:89] "coredns-668d6bf9bc-qd2nj" [1f982191-c45a-4681-907d-a0d9220b1f77] Running
	I0317 11:16:37.698568    8508 system_pods.go:89] "coredns-668d6bf9bc-rhhkv" [0dc113a4-430f-4c5b-bc05-05d8cc014ed7] Running
	I0317 11:16:37.698568    8508 system_pods.go:89] "etcd-ha-450500" [7a735c5c-89ec-488d-95c8-f7fa1160fa3c] Running
	I0317 11:16:37.698568    8508 system_pods.go:89] "etcd-ha-450500-m02" [78234926-bd4c-41f7-9f48-43e2bcd543a4] Running
	I0317 11:16:37.698568    8508 system_pods.go:89] "etcd-ha-450500-m03" [c18e3ae6-30ed-44d8-8c4a-5dad20e962f9] Running
	I0317 11:16:37.698568    8508 system_pods.go:89] "kindnet-94r58" [4b18e7c6-4105-4037-8742-d58ad9eda200] Running
	I0317 11:16:37.698568    8508 system_pods.go:89] "kindnet-ch8f7" [6247c683-e723-4b72-b373-89cb4f1b576d] Running
	I0317 11:16:37.698568    8508 system_pods.go:89] "kindnet-prwhr" [0f7a825d-bd7c-4428-8685-cfa8926ef827] Running
	I0317 11:16:37.698716    8508 system_pods.go:89] "kube-apiserver-ha-450500" [232d0746-fa49-4d17-b36e-557164865a8f] Running
	I0317 11:16:37.698716    8508 system_pods.go:89] "kube-apiserver-ha-450500-m02" [a445c598-90ff-4e54-a96f-3d206a54a108] Running
	I0317 11:16:37.698716    8508 system_pods.go:89] "kube-apiserver-ha-450500-m03" [96fcebba-cad8-4023-b7f7-08ac83263448] Running
	I0317 11:16:37.698716    8508 system_pods.go:89] "kube-controller-manager-ha-450500" [afff30e2-5638-4b0f-bbe0-ec65ff25eef4] Running
	I0317 11:16:37.698822    8508 system_pods.go:89] "kube-controller-manager-ha-450500-m02" [e931480a-7ff2-4264-bef5-ff129c603e77] Running
	I0317 11:16:37.698822    8508 system_pods.go:89] "kube-controller-manager-ha-450500-m03" [cf875e54-a5d1-48bb-906b-f0be64a0d579] Running
	I0317 11:16:37.698822    8508 system_pods.go:89] "kube-proxy-fthkw" [8a9b1cd2-2eb8-49ac-8cc5-df138d6d0670] Running
	I0317 11:16:37.698822    8508 system_pods.go:89] "kube-proxy-jzvxr" [eeae069e-8b5a-449e-9fe2-ecafcd9733eb] Running
	I0317 11:16:37.698822    8508 system_pods.go:89] "kube-proxy-ktktm" [2900bbaf-f433-41ca-a7f2-8491834c1c3d] Running
	I0317 11:16:37.698937    8508 system_pods.go:89] "kube-scheduler-ha-450500" [66d11ee5-babc-401c-bf3f-8bd94eb09e06] Running
	I0317 11:16:37.698937    8508 system_pods.go:89] "kube-scheduler-ha-450500-m02" [053f7a43-bff9-4dc7-8d8d-126f265dfbed] Running
	I0317 11:16:37.698937    8508 system_pods.go:89] "kube-scheduler-ha-450500-m03" [ee967b5b-f00f-4680-beec-6d938e449577] Running
	I0317 11:16:37.698937    8508 system_pods.go:89] "kube-vip-ha-450500" [55e8c247-e66e-47b2-b766-ead538ec0b9a] Running
	I0317 11:16:37.698937    8508 system_pods.go:89] "kube-vip-ha-450500-m02" [df1997ea-6b13-4caa-a041-2c63da793276] Running
	I0317 11:16:37.698937    8508 system_pods.go:89] "kube-vip-ha-450500-m03" [4153bf28-7e36-4ff9-9ddd-334201353a29] Running
	I0317 11:16:37.699042    8508 system_pods.go:89] "storage-provisioner" [b1a4725e-cedf-428a-a320-59d7374cba0d] Running
	I0317 11:16:37.699042    8508 system_pods.go:126] duration metric: took 205.6293ms to wait for k8s-apps to be running ...
	I0317 11:16:37.699042    8508 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 11:16:37.712405    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 11:16:37.736979    8508 system_svc.go:56] duration metric: took 37.937ms WaitForService to wait for kubelet
	I0317 11:16:37.736979    8508 kubeadm.go:582] duration metric: took 26.437917s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 11:16:37.736979    8508 node_conditions.go:102] verifying NodePressure condition ...
	I0317 11:16:37.888682    8508 request.go:661] Waited for 151.7019ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes
	I0317 11:16:37.888682    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes
	I0317 11:16:37.888682    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:37.888682    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:37.888682    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:37.894930    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:37.894930    8508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 11:16:37.894930    8508 node_conditions.go:123] node cpu capacity is 2
	I0317 11:16:37.894930    8508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 11:16:37.894930    8508 node_conditions.go:123] node cpu capacity is 2
	I0317 11:16:37.894930    8508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 11:16:37.895482    8508 node_conditions.go:123] node cpu capacity is 2
	I0317 11:16:37.895482    8508 node_conditions.go:105] duration metric: took 158.5014ms to run NodePressure ...
	I0317 11:16:37.895482    8508 start.go:241] waiting for startup goroutines ...
	I0317 11:16:37.895556    8508 start.go:255] writing updated cluster config ...
	I0317 11:16:37.907575    8508 ssh_runner.go:195] Run: rm -f paused
	I0317 11:16:38.055753    8508 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 11:16:38.060861    8508 out.go:177] * Done! kubectl is now configured to use "ha-450500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 17 11:08:54 ha-450500 cri-dockerd[1348]: time="2025-03-17T11:08:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a48ca0bd821650855c3c1b374387b1c204e09ce396b8392b93b3b1d1fede54ec/resolv.conf as [nameserver 172.25.16.1]"
	Mar 17 11:08:54 ha-450500 cri-dockerd[1348]: time="2025-03-17T11:08:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/65fd3bc7770e1e6f1a34dd73c3fcb5263502b371e093fff8b9cc592c1f36c9a0/resolv.conf as [nameserver 172.25.16.1]"
	Mar 17 11:08:54 ha-450500 cri-dockerd[1348]: time="2025-03-17T11:08:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b54c1f80fa563c0fad7d88ba3b16ab3cef5d1eab286511fe3d8a68198abbab03/resolv.conf as [nameserver 172.25.16.1]"
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.100638769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.100906369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.100935769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.101271569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.211537851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.211627651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.211642551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.211743251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.272664697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.272793597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.272830497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.273571297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:17:16 ha-450500 dockerd[1454]: time="2025-03-17T11:17:16.709348420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 11:17:16 ha-450500 dockerd[1454]: time="2025-03-17T11:17:16.709519821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 11:17:16 ha-450500 dockerd[1454]: time="2025-03-17T11:17:16.710228026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:17:16 ha-450500 dockerd[1454]: time="2025-03-17T11:17:16.712931146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:17:16 ha-450500 cri-dockerd[1348]: time="2025-03-17T11:17:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7b175ad9e9b3c38a1c5218d48a3b59d2c6c88923d3d1235885b096672f542e7e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 17 11:17:19 ha-450500 cri-dockerd[1348]: time="2025-03-17T11:17:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 17 11:17:20 ha-450500 dockerd[1454]: time="2025-03-17T11:17:20.227331776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 11:17:20 ha-450500 dockerd[1454]: time="2025-03-17T11:17:20.227524978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 11:17:20 ha-450500 dockerd[1454]: time="2025-03-17T11:17:20.227548679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:17:20 ha-450500 dockerd[1454]: time="2025-03-17T11:17:20.228548791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ac5beadc15387       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   7b175ad9e9b3c       busybox-58667487b6-w6ngz
	8b6dc12f0f0ae       c69fa2e9cbf5f                                                                                         9 minutes ago        Running             coredns                   0                   b54c1f80fa563       coredns-668d6bf9bc-qd2nj
	c96833115608b       c69fa2e9cbf5f                                                                                         9 minutes ago        Running             coredns                   0                   a48ca0bd82165       coredns-668d6bf9bc-rhhkv
	bb5aa5f55fea9       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   65fd3bc7770e1       storage-provisioner
	f00705dba2c6f       kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495              9 minutes ago        Running             kindnet-cni               0                   f3e6d9f300163       kindnet-prwhr
	fe97a5e85c404       f1332858868e1                                                                                         9 minutes ago        Running             kube-proxy                0                   4436ea277f3d0       kube-proxy-jzvxr
	7409d75987fc7       ghcr.io/kube-vip/kube-vip@sha256:717b8bef2758c10042d64ae7949201ef7f243d928fce265b04e488e844bf9528     10 minutes ago       Running             kube-vip                  0                   f8d96e2c076f1       kube-vip-ha-450500
	b11cf03bfdb6e       a9e7e6b294baf                                                                                         10 minutes ago       Running             etcd                      0                   48c77eb7fa6a3       etcd-ha-450500
	b3f198d2c66ea       85b7a174738ba                                                                                         10 minutes ago       Running             kube-apiserver            0                   fb904acbea4b4       kube-apiserver-ha-450500
	c94d28127c400       b6a454c5a800d                                                                                         10 minutes ago       Running             kube-controller-manager   0                   aeb305ea186a6       kube-controller-manager-ha-450500
	42fa7c58af327       d8e673e7c9983                                                                                         10 minutes ago       Running             kube-scheduler            0                   053e0f10ab0a0       kube-scheduler-ha-450500
	
	
	==> coredns [8b6dc12f0f0a] <==
	[INFO] 10.244.2.2:33333 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.09351468s
	[INFO] 10.244.2.2:36375 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000086301s
	[INFO] 10.244.2.2:55168 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000157302s
	[INFO] 10.244.1.2:33030 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231303s
	[INFO] 10.244.1.2:33270 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.054625989s
	[INFO] 10.244.1.2:52717 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000213403s
	[INFO] 10.244.1.2:57766 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000219703s
	[INFO] 10.244.1.2:55057 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000208902s
	[INFO] 10.244.1.2:44848 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097901s
	[INFO] 10.244.0.4:42535 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000103302s
	[INFO] 10.244.0.4:58382 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000334304s
	[INFO] 10.244.0.4:50531 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198003s
	[INFO] 10.244.0.4:35022 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000252903s
	[INFO] 10.244.2.2:53460 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000084802s
	[INFO] 10.244.2.2:55347 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.117711883s
	[INFO] 10.244.2.2:47928 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131901s
	[INFO] 10.244.2.2:60937 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000648408s
	[INFO] 10.244.1.2:51522 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189103s
	[INFO] 10.244.1.2:59967 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067301s
	[INFO] 10.244.0.4:52392 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000280604s
	[INFO] 10.244.1.2:35122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123601s
	[INFO] 10.244.1.2:37968 - 5 "PTR IN 1.16.25.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000171002s
	[INFO] 10.244.0.4:37046 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000272104s
	[INFO] 10.244.2.2:47382 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175203s
	[INFO] 10.244.2.2:35633 - 5 "PTR IN 1.16.25.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000255103s
	
	
	==> coredns [c96833115608] <==
	[INFO] 10.244.1.2:34400 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153602s
	[INFO] 10.244.0.4:41325 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115602s
	[INFO] 10.244.0.4:39113 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000069801s
	[INFO] 10.244.0.4:48614 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000197902s
	[INFO] 10.244.0.4:36131 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123701s
	[INFO] 10.244.2.2:60933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164402s
	[INFO] 10.244.2.2:43194 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138602s
	[INFO] 10.244.2.2:44880 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000230302s
	[INFO] 10.244.2.2:41107 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000305204s
	[INFO] 10.244.1.2:37068 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000173802s
	[INFO] 10.244.1.2:47394 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121202s
	[INFO] 10.244.0.4:54230 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164902s
	[INFO] 10.244.0.4:60421 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000217603s
	[INFO] 10.244.0.4:41013 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099902s
	[INFO] 10.244.2.2:34058 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123702s
	[INFO] 10.244.2.2:55260 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000268603s
	[INFO] 10.244.2.2:44536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118501s
	[INFO] 10.244.2.2:53382 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070701s
	[INFO] 10.244.1.2:57646 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000178202s
	[INFO] 10.244.1.2:33199 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000239303s
	[INFO] 10.244.0.4:55568 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145201s
	[INFO] 10.244.0.4:43872 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000246803s
	[INFO] 10.244.0.4:50569 - 5 "PTR IN 1.16.25.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000147402s
	[INFO] 10.244.2.2:35716 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162402s
	[INFO] 10.244.2.2:40961 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000375505s
	
	
	==> describe nodes <==
	Name:               ha-450500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76
	                    minikube.k8s.io/name=ha-450500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_17T11_08_27_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 11:08:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 11:18:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 11:17:26 +0000   Mon, 17 Mar 2025 11:08:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 11:17:26 +0000   Mon, 17 Mar 2025 11:08:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 11:17:26 +0000   Mon, 17 Mar 2025 11:08:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 11:17:26 +0000   Mon, 17 Mar 2025 11:08:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.16.34
	  Hostname:    ha-450500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 086b42e2f51840d48852f2b6b010a1c5
	  System UUID:                b88424f0-f6b6-e042-a8c7-9b475f6d85d7
	  Boot ID:                    e0170758-1bda-40b1-bc10-3eb7052a9b72
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-w6ngz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 coredns-668d6bf9bc-qd2nj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m53s
	  kube-system                 coredns-668d6bf9bc-rhhkv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m53s
	  kube-system                 etcd-ha-450500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m59s
	  kube-system                 kindnet-prwhr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m53s
	  kube-system                 kube-apiserver-ha-450500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 kube-controller-manager-ha-450500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 kube-proxy-jzvxr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 kube-scheduler-ha-450500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 kube-vip-ha-450500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m50s  kube-proxy       
	  Normal  Starting                 9m58s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m57s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m57s  kubelet          Node ha-450500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m57s  kubelet          Node ha-450500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m57s  kubelet          Node ha-450500 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m54s  node-controller  Node ha-450500 event: Registered Node ha-450500 in Controller
	  Normal  NodeReady                9m30s  kubelet          Node ha-450500 status is now: NodeReady
	  Normal  RegisteredNode           6m5s   node-controller  Node ha-450500 event: Registered Node ha-450500 in Controller
	  Normal  RegisteredNode           2m7s   node-controller  Node ha-450500 event: Registered Node ha-450500 in Controller
	
	
	Name:               ha-450500-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76
	                    minikube.k8s.io/name=ha-450500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_03_17T11_12_11_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 11:12:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 11:18:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 11:17:43 +0000   Mon, 17 Mar 2025 11:12:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 11:17:43 +0000   Mon, 17 Mar 2025 11:12:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 11:17:43 +0000   Mon, 17 Mar 2025 11:12:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 11:17:43 +0000   Mon, 17 Mar 2025 11:12:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.21.189
	  Hostname:    ha-450500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 28f48d510cbb410a972a113bd4575506
	  System UUID:                ae8a8a30-dedf-2944-aa61-0f3914deab55
	  Boot ID:                    4884bf50-6613-47a2-85ee-7ff2ee44b27c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-9977c                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 etcd-ha-450500-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m15s
	  kube-system                 kindnet-ch8f7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m17s
	  kube-system                 kube-apiserver-ha-450500-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-controller-manager-ha-450500-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-fthkw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-ha-450500-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-vip-ha-450500-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m17s (x8 over 6m17s)  kubelet          Node ha-450500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m17s (x8 over 6m17s)  kubelet          Node ha-450500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m17s (x7 over 6m17s)  kubelet          Node ha-450500-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-450500-m02 event: Registered Node ha-450500-m02 in Controller
	  Normal  RegisteredNode           6m5s                   node-controller  Node ha-450500-m02 event: Registered Node ha-450500-m02 in Controller
	  Normal  RegisteredNode           2m7s                   node-controller  Node ha-450500-m02 event: Registered Node ha-450500-m02 in Controller
	
	
	Name:               ha-450500-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76
	                    minikube.k8s.io/name=ha-450500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_03_17T11_16_10_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 11:16:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 11:18:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 11:17:34 +0000   Mon, 17 Mar 2025 11:16:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 11:17:34 +0000   Mon, 17 Mar 2025 11:16:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 11:17:34 +0000   Mon, 17 Mar 2025 11:16:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 11:17:34 +0000   Mon, 17 Mar 2025 11:16:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.19.102
	  Hostname:    ha-450500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 a954376ce9cd4e58ba8799d1ee27e53b
	  System UUID:                0fd182b4-ce08-b24b-a567-bff91ecedb7d
	  Boot ID:                    19342931-4b92-45ba-8dae-537376f25bbf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-xlpx5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 etcd-ha-450500-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m19s
	  kube-system                 kindnet-94r58                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m21s
	  kube-system                 kube-apiserver-ha-450500-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-ha-450500-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-ktktm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-ha-450500-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-vip-ha-450500-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m21s (x8 over 2m21s)  kubelet          Node ha-450500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s (x8 over 2m21s)  kubelet          Node ha-450500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s (x7 over 2m21s)  kubelet          Node ha-450500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m20s                  node-controller  Node ha-450500-m03 event: Registered Node ha-450500-m03 in Controller
	  Normal  RegisteredNode           2m18s                  node-controller  Node ha-450500-m03 event: Registered Node ha-450500-m03 in Controller
	  Normal  RegisteredNode           2m7s                   node-controller  Node ha-450500-m03 event: Registered Node ha-450500-m03 in Controller
	
	
	==> dmesg <==
	[  +1.962507] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.261044] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar17 11:07] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.170471] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[ +31.523484] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +0.111714] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.601464] systemd-fstab-generator[1052]: Ignoring "noauto" option for root device
	[  +0.193596] systemd-fstab-generator[1064]: Ignoring "noauto" option for root device
	[  +0.213073] systemd-fstab-generator[1078]: Ignoring "noauto" option for root device
	[  +2.889581] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.200551] systemd-fstab-generator[1313]: Ignoring "noauto" option for root device
	[  +0.197341] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +0.256376] systemd-fstab-generator[1340]: Ignoring "noauto" option for root device
	[Mar17 11:08] systemd-fstab-generator[1438]: Ignoring "noauto" option for root device
	[  +0.105755] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.694607] systemd-fstab-generator[1708]: Ignoring "noauto" option for root device
	[  +6.788709] systemd-fstab-generator[1859]: Ignoring "noauto" option for root device
	[  +0.105591] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.505315] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.579854] systemd-fstab-generator[2388]: Ignoring "noauto" option for root device
	[  +5.997758] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.240667] kauditd_printk_skb: 29 callbacks suppressed
	[Mar17 11:12] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [b11cf03bfdb6] <==
	{"level":"info","ts":"2025-03-17T11:16:06.804114Z","caller":"traceutil/trace.go:171","msg":"trace[1111555695] transaction","detail":"{read_only:false; response_revision:1478; number_of_response:1; }","duration":"115.446826ms","start":"2025-03-17T11:16:06.688617Z","end":"2025-03-17T11:16:06.804064Z","steps":["trace[1111555695] 'process raft request'  (duration: 86.545519ms)","trace[1111555695] 'compare'  (duration: 28.663106ms)"],"step_count":2}
	{"level":"info","ts":"2025-03-17T11:16:06.804884Z","caller":"traceutil/trace.go:171","msg":"trace[866365495] transaction","detail":"{read_only:false; response_revision:1479; number_of_response:1; }","duration":"110.232789ms","start":"2025-03-17T11:16:06.694639Z","end":"2025-03-17T11:16:06.804872Z","steps":["trace[866365495] 'process raft request'  (duration: 109.284082ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T11:16:07.088592Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"329bf8d2aaab106a","to":"f6d801ec72a79dd","stream-type":"stream Message"}
	{"level":"info","ts":"2025-03-17T11:16:07.089020Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"f6d801ec72a79dd"}
	{"level":"info","ts":"2025-03-17T11:16:07.089358Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"329bf8d2aaab106a","remote-peer-id":"f6d801ec72a79dd"}
	{"level":"info","ts":"2025-03-17T11:16:07.096404Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"329bf8d2aaab106a","to":"f6d801ec72a79dd","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-03-17T11:16:07.096488Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"329bf8d2aaab106a","remote-peer-id":"f6d801ec72a79dd"}
	{"level":"info","ts":"2025-03-17T11:16:07.122819Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"329bf8d2aaab106a","remote-peer-id":"f6d801ec72a79dd"}
	{"level":"info","ts":"2025-03-17T11:16:07.124173Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"329bf8d2aaab106a","remote-peer-id":"f6d801ec72a79dd"}
	{"level":"warn","ts":"2025-03-17T11:16:07.639292Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"f6d801ec72a79dd","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2025-03-17T11:16:08.639573Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"f6d801ec72a79dd","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2025-03-17T11:16:08.731254Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.53357ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T11:16:08.731416Z","caller":"traceutil/trace.go:171","msg":"trace[315181519] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1511; }","duration":"135.707171ms","start":"2025-03-17T11:16:08.595695Z","end":"2025-03-17T11:16:08.731402Z","steps":["trace[315181519] 'range keys from in-memory index tree'  (duration: 135.52107ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T11:16:09.645552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"329bf8d2aaab106a switched to configuration voters=(1111685552709204445 2321470703431241016 3646781906976706666)"}
	{"level":"info","ts":"2025-03-17T11:16:09.645868Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"c1d0efeade47d0da","local-member-id":"329bf8d2aaab106a"}
	{"level":"info","ts":"2025-03-17T11:16:09.645904Z","caller":"etcdserver/server.go:2018","msg":"applied a configuration change through raft","local-member-id":"329bf8d2aaab106a","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"f6d801ec72a79dd"}
	{"level":"info","ts":"2025-03-17T11:16:14.488895Z","caller":"traceutil/trace.go:171","msg":"trace[1134422171] transaction","detail":"{read_only:false; response_revision:1535; number_of_response:1; }","duration":"133.398754ms","start":"2025-03-17T11:16:14.355464Z","end":"2025-03-17T11:16:14.488863Z","steps":["trace[1134422171] 'process raft request'  (duration: 133.282253ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T11:16:15.793938Z","caller":"traceutil/trace.go:171","msg":"trace[1949528539] transaction","detail":"{read_only:false; response_revision:1542; number_of_response:1; }","duration":"236.15769ms","start":"2025-03-17T11:16:15.557765Z","end":"2025-03-17T11:16:15.793922Z","steps":["trace[1949528539] 'process raft request'  (duration: 236.030889ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T11:16:15.793732Z","caller":"etcdserver/raft.go:426","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"2037854e1a7ecd38","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"46.519663ms"}
	{"level":"warn","ts":"2025-03-17T11:16:15.795246Z","caller":"etcdserver/raft.go:426","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"f6d801ec72a79dd","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"48.036574ms"}
	{"level":"info","ts":"2025-03-17T11:18:20.019518Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1051}
	{"level":"warn","ts":"2025-03-17T11:18:20.252820Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.511304ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1182922382844312569 > lease_revoke:<id:106a95a3ca4b3b99>","response":"size:28"}
	{"level":"info","ts":"2025-03-17T11:18:20.267251Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1051,"took":"247.126506ms","hash":2839961366,"current-db-size-bytes":3719168,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":2150400,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-03-17T11:18:20.267303Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2839961366,"revision":1051,"compact-revision":-1}
	{"level":"info","ts":"2025-03-17T11:18:20.268285Z","caller":"traceutil/trace.go:171","msg":"trace[700552296] transaction","detail":"{read_only:false; response_revision:1980; number_of_response:1; }","duration":"166.315424ms","start":"2025-03-17T11:18:20.101952Z","end":"2025-03-17T11:18:20.268268Z","steps":["trace[700552296] 'process raft request'  (duration: 165.787017ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:18:23 up 12 min,  0 users,  load average: 0.49, 0.55, 0.33
	Linux ha-450500 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f00705dba2c6] <==
	I0317 11:17:40.814182       1 main.go:324] Node ha-450500-m03 has CIDR [10.244.2.0/24] 
	I0317 11:17:50.813279       1 main.go:297] Handling node with IPs: map[172.25.21.189:{}]
	I0317 11:17:50.815230       1 main.go:324] Node ha-450500-m02 has CIDR [10.244.1.0/24] 
	I0317 11:17:50.815733       1 main.go:297] Handling node with IPs: map[172.25.19.102:{}]
	I0317 11:17:50.815908       1 main.go:324] Node ha-450500-m03 has CIDR [10.244.2.0/24] 
	I0317 11:17:50.816421       1 main.go:297] Handling node with IPs: map[172.25.16.34:{}]
	I0317 11:17:50.816453       1 main.go:301] handling current node
	I0317 11:18:00.814367       1 main.go:297] Handling node with IPs: map[172.25.16.34:{}]
	I0317 11:18:00.814466       1 main.go:301] handling current node
	I0317 11:18:00.814486       1 main.go:297] Handling node with IPs: map[172.25.21.189:{}]
	I0317 11:18:00.814494       1 main.go:324] Node ha-450500-m02 has CIDR [10.244.1.0/24] 
	I0317 11:18:00.815155       1 main.go:297] Handling node with IPs: map[172.25.19.102:{}]
	I0317 11:18:00.815250       1 main.go:324] Node ha-450500-m03 has CIDR [10.244.2.0/24] 
	I0317 11:18:10.820933       1 main.go:297] Handling node with IPs: map[172.25.16.34:{}]
	I0317 11:18:10.821088       1 main.go:301] handling current node
	I0317 11:18:10.821107       1 main.go:297] Handling node with IPs: map[172.25.21.189:{}]
	I0317 11:18:10.821115       1 main.go:324] Node ha-450500-m02 has CIDR [10.244.1.0/24] 
	I0317 11:18:10.821549       1 main.go:297] Handling node with IPs: map[172.25.19.102:{}]
	I0317 11:18:10.821565       1 main.go:324] Node ha-450500-m03 has CIDR [10.244.2.0/24] 
	I0317 11:18:20.813175       1 main.go:297] Handling node with IPs: map[172.25.16.34:{}]
	I0317 11:18:20.813276       1 main.go:301] handling current node
	I0317 11:18:20.813297       1 main.go:297] Handling node with IPs: map[172.25.21.189:{}]
	I0317 11:18:20.813304       1 main.go:324] Node ha-450500-m02 has CIDR [10.244.1.0/24] 
	I0317 11:18:20.814257       1 main.go:297] Handling node with IPs: map[172.25.19.102:{}]
	I0317 11:18:20.814277       1 main.go:324] Node ha-450500-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [b3f198d2c66e] <==
	I0317 11:08:25.903393       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0317 11:08:25.949194       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0317 11:08:25.978408       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0317 11:08:30.491739       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0317 11:08:30.678800       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0317 11:16:03.540468       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 10.7µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0317 11:16:03.540828       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0317 11:16:03.542379       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0317 11:16:03.543836       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0317 11:16:03.567276       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="44.528619ms" method="PATCH" path="/api/v1/namespaces/default/events/ha-450500-m03.182d92e98a85f536" result=null
	E0317 11:17:24.192109       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54353: use of closed network connection
	E0317 11:17:24.757625       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54355: use of closed network connection
	E0317 11:17:25.368602       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54357: use of closed network connection
	E0317 11:17:25.932074       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54359: use of closed network connection
	E0317 11:17:26.467799       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54361: use of closed network connection
	E0317 11:17:27.098210       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54363: use of closed network connection
	E0317 11:17:27.607909       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54365: use of closed network connection
	E0317 11:17:28.103624       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54367: use of closed network connection
	E0317 11:17:28.626585       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54369: use of closed network connection
	E0317 11:17:29.533713       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54372: use of closed network connection
	E0317 11:17:40.041383       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54374: use of closed network connection
	E0317 11:17:40.557259       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54377: use of closed network connection
	E0317 11:17:51.028903       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54379: use of closed network connection
	E0317 11:17:51.538092       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54382: use of closed network connection
	E0317 11:18:02.030226       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54384: use of closed network connection
	
	
	==> kube-controller-manager [c94d28127c40] <==
	I0317 11:16:11.316264       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m03"
	I0317 11:16:13.214492       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m03"
	I0317 11:16:16.549862       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m03"
	I0317 11:16:16.659512       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m03"
	I0317 11:16:31.759469       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m03"
	I0317 11:16:31.787135       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m03"
	I0317 11:16:31.951207       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m02"
	I0317 11:16:33.370515       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m03"
	I0317 11:16:34.022912       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m03"
	I0317 11:17:06.495895       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500"
	I0317 11:17:15.836750       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="173.65594ms"
	I0317 11:17:16.245374       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="408.320815ms"
	I0317 11:17:16.598384       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="352.781019ms"
	I0317 11:17:16.631393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="32.161129ms"
	I0317 11:17:16.632548       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="90.501µs"
	I0317 11:17:16.806742       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="54µs"
	I0317 11:17:19.733408       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.256198ms"
	I0317 11:17:19.734102       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="150.002µs"
	I0317 11:17:20.154062       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="68.443166ms"
	I0317 11:17:20.155347       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="187.003µs"
	I0317 11:17:21.396090       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="24.54751ms"
	I0317 11:17:21.398370       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="96.401µs"
	I0317 11:17:26.972889       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500"
	I0317 11:17:34.601150       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m03"
	I0317 11:17:43.662528       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m02"
	
	
	==> kube-proxy [fe97a5e85c40] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0317 11:08:32.506830       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0317 11:08:32.553358       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.16.34"]
	E0317 11:08:32.554355       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 11:08:32.632280       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0317 11:08:32.632439       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0317 11:08:32.632491       1 server_linux.go:170] "Using iptables Proxier"
	I0317 11:08:32.638138       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 11:08:32.641166       1 server.go:497] "Version info" version="v1.32.2"
	I0317 11:08:32.641324       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 11:08:32.647838       1 config.go:329] "Starting node config controller"
	I0317 11:08:32.649156       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 11:08:32.651526       1 config.go:199] "Starting service config controller"
	I0317 11:08:32.651752       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 11:08:32.651928       1 config.go:105] "Starting endpoint slice config controller"
	I0317 11:08:32.652176       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 11:08:32.750078       1 shared_informer.go:320] Caches are synced for node config
	I0317 11:08:32.753359       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0317 11:08:32.753395       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [42fa7c58af32] <==
	W0317 11:08:23.446641       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0317 11:08:23.446672       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:23.511288       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0317 11:08:23.512225       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:23.512418       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0317 11:08:23.512471       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:23.530485       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0317 11:08:23.530607       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:23.578418       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0317 11:08:23.580087       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:23.634088       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0317 11:08:23.634386       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:23.816337       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0317 11:08:23.817157       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:23.823127       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0317 11:08:23.823421       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:23.845289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0317 11:08:23.845337       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:23.933132       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0317 11:08:23.933400       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:23.977047       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0317 11:08:23.977261       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:24.004722       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0317 11:08:24.004765       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0317 11:08:25.923446       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 17 11:13:26 ha-450500 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 17 11:13:26 ha-450500 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 17 11:13:26 ha-450500 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 17 11:13:26 ha-450500 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 17 11:14:26 ha-450500 kubelet[2395]: E0317 11:14:26.155679    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Mar 17 11:14:26 ha-450500 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 17 11:14:26 ha-450500 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 17 11:14:26 ha-450500 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 17 11:14:26 ha-450500 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 17 11:15:26 ha-450500 kubelet[2395]: E0317 11:15:26.156602    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Mar 17 11:15:26 ha-450500 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 17 11:15:26 ha-450500 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 17 11:15:26 ha-450500 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 17 11:15:26 ha-450500 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 17 11:16:26 ha-450500 kubelet[2395]: E0317 11:16:26.156839    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Mar 17 11:16:26 ha-450500 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 17 11:16:26 ha-450500 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 17 11:16:26 ha-450500 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 17 11:16:26 ha-450500 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 17 11:17:15 ha-450500 kubelet[2395]: I0317 11:17:15.894814    2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfz5l\" (UniqueName: \"kubernetes.io/projected/b82bc2d2-c9d6-493f-8b1e-31a15dea1ccb-kube-api-access-bfz5l\") pod \"busybox-58667487b6-w6ngz\" (UID: \"b82bc2d2-c9d6-493f-8b1e-31a15dea1ccb\") " pod="default/busybox-58667487b6-w6ngz"
	Mar 17 11:17:26 ha-450500 kubelet[2395]: E0317 11:17:26.167733    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Mar 17 11:17:26 ha-450500 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 17 11:17:26 ha-450500 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 17 11:17:26 ha-450500 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 17 11:17:26 ha-450500 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-450500 -n ha-450500
E0317 11:18:37.827739    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-450500 -n ha-450500: (12.6635134s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-450500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (69.75s)

TestMultiControlPlane/serial/StopSecondaryNode (65.76s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-450500 node stop m02 -v=7 --alsologtostderr: exit status 1 (28.9088448s)

-- stdout --
	* Stopping node "ha-450500-m02"  ...
	* Powering off "ha-450500-m02" via SSH ...

-- /stdout --
** stderr ** 
	I0317 11:34:47.752158    3300 out.go:345] Setting OutFile to fd 1200 ...
	I0317 11:34:47.851663    3300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:34:47.851663    3300 out.go:358] Setting ErrFile to fd 932...
	I0317 11:34:47.851787    3300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:34:47.866510    3300 mustload.go:65] Loading cluster: ha-450500
	I0317 11:34:47.868274    3300 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:34:47.868274    3300 stop.go:39] StopHost: ha-450500-m02
	I0317 11:34:47.874280    3300 out.go:177] * Stopping node "ha-450500-m02"  ...
	I0317 11:34:47.877895    3300 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0317 11:34:47.889332    3300 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0317 11:34:47.889912    3300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:34:50.138774    3300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:34:50.138836    3300 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:34:50.138836    3300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:34:52.860759    3300 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:34:52.860759    3300 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:34:52.861083    3300 sshutil.go:53] new ssh client: &{IP:172.25.21.189 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\id_rsa Username:docker}
	I0317 11:34:52.987162    3300 ssh_runner.go:235] Completed: sudo mkdir -p /var/lib/minikube/backup: (5.0977855s)
	I0317 11:34:53.004042    3300 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0317 11:34:53.090491    3300 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0317 11:34:53.157098    3300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:34:55.384069    3300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:34:55.384069    3300 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:34:55.386817    3300 out.go:177] * Powering off "ha-450500-m02" via SSH ...
	I0317 11:34:55.390034    3300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:34:57.621588    3300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:34:57.621588    3300 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:34:57.621588    3300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:35:00.255902    3300 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:35:00.255902    3300 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:35:00.262641    3300 main.go:141] libmachine: Using SSH client type: native
	I0317 11:35:00.263368    3300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.21.189 22 <nil> <nil>}
	I0317 11:35:00.263368    3300 main.go:141] libmachine: About to run SSH command:
	sudo poweroff
	I0317 11:35:00.432295    3300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 11:35:00.432295    3300 stop.go:100] poweroff result: out=, err=<nil>
	I0317 11:35:00.432295    3300 main.go:141] libmachine: Stopping "ha-450500-m02"...
	I0317 11:35:00.432962    3300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:35:03.417329    3300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:35:03.417555    3300 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:35:03.417712    3300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Stop-VM ha-450500-m02

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-windows-amd64.exe -p ha-450500 node stop m02 -v=7 --alsologtostderr": exit status 1
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-450500 status -v=7 --alsologtostderr: context deadline exceeded (101.6µs)
ha_test.go:374: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-450500 status -v=7 --alsologtostderr" : context deadline exceeded
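The stderr block above shows that libmachine drives Hyper-V by shelling out to PowerShell, logging each invocation with an `[executing ==>]` marker. When triaging a failed `node stop` like this one, it can help to pull just those commands out of the log. The following is a minimal sketch (not part of minikube or its test tooling; the helper name and regex are mine) of how that extraction could look:

```python
import re

# Hypothetical helper: extract the PowerShell command bodies that libmachine
# logged with "[executing ==>]" markers, skipping the powershell.exe path and
# its leading flags (-NoProfile -NonInteractive).
EXEC_RE = re.compile(r"\[executing ==>\] : (?:\S*powershell\.exe) (?:-\S+ )*(.+)$")

def extract_powershell_commands(log_text: str) -> list[str]:
    """Return the PowerShell command bodies found in a libmachine log."""
    cmds = []
    for line in log_text.splitlines():
        m = EXEC_RE.search(line)
        if m:
            cmds.append(m.group(1))
    return cmds

# Two sample lines taken from the log above.
sample = """\
I0317 11:35:00.432962 3300 main.go:141] libmachine: [executing ==>] : C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\powershell.exe -NoProfile -NonInteractive ( Hyper-V\\Get-VM ha-450500-m02 ).state
I0317 11:35:03.417712 3300 main.go:141] libmachine: [executing ==>] : C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\powershell.exe -NoProfile -NonInteractive Hyper-V\\Stop-VM ha-450500-m02
"""
print(extract_powershell_commands(sample))
```

Running this over the full stderr above shows the sequence ending at `Hyper-V\Stop-VM ha-450500-m02`, the last command issued before the test's context deadline expired.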
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-450500 -n ha-450500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-450500 -n ha-450500: (13.0936114s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 logs -n 25: (9.4127732s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-450500 cp ha-450500-m03:/home/docker/cp-test.txt                                                                       | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:30 UTC | 17 Mar 25 11:30 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1823244443\001\cp-test_ha-450500-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-450500 ssh -n                                                                                                          | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:30 UTC | 17 Mar 25 11:30 UTC |
	|         | ha-450500-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-450500 cp ha-450500-m03:/home/docker/cp-test.txt                                                                       | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:30 UTC | 17 Mar 25 11:30 UTC |
	|         | ha-450500:/home/docker/cp-test_ha-450500-m03_ha-450500.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-450500 ssh -n                                                                                                          | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:30 UTC | 17 Mar 25 11:30 UTC |
	|         | ha-450500-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-450500 ssh -n ha-450500 sudo cat                                                                                       | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:30 UTC | 17 Mar 25 11:31 UTC |
	|         | /home/docker/cp-test_ha-450500-m03_ha-450500.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-450500 cp ha-450500-m03:/home/docker/cp-test.txt                                                                       | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:31 UTC | 17 Mar 25 11:31 UTC |
	|         | ha-450500-m02:/home/docker/cp-test_ha-450500-m03_ha-450500-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-450500 ssh -n                                                                                                          | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:31 UTC | 17 Mar 25 11:31 UTC |
	|         | ha-450500-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-450500 ssh -n ha-450500-m02 sudo cat                                                                                   | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:31 UTC | 17 Mar 25 11:31 UTC |
	|         | /home/docker/cp-test_ha-450500-m03_ha-450500-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-450500 cp ha-450500-m03:/home/docker/cp-test.txt                                                                       | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:31 UTC | 17 Mar 25 11:31 UTC |
	|         | ha-450500-m04:/home/docker/cp-test_ha-450500-m03_ha-450500-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-450500 ssh -n                                                                                                          | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:31 UTC | 17 Mar 25 11:32 UTC |
	|         | ha-450500-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-450500 ssh -n ha-450500-m04 sudo cat                                                                                   | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:32 UTC | 17 Mar 25 11:32 UTC |
	|         | /home/docker/cp-test_ha-450500-m03_ha-450500-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-450500 cp testdata\cp-test.txt                                                                                         | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:32 UTC | 17 Mar 25 11:32 UTC |
	|         | ha-450500-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-450500 ssh -n                                                                                                          | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:32 UTC | 17 Mar 25 11:32 UTC |
	|         | ha-450500-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-450500 cp ha-450500-m04:/home/docker/cp-test.txt                                                                       | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:32 UTC | 17 Mar 25 11:32 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1823244443\001\cp-test_ha-450500-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-450500 ssh -n                                                                                                          | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:32 UTC | 17 Mar 25 11:32 UTC |
	|         | ha-450500-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-450500 cp ha-450500-m04:/home/docker/cp-test.txt                                                                       | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:32 UTC | 17 Mar 25 11:33 UTC |
	|         | ha-450500:/home/docker/cp-test_ha-450500-m04_ha-450500.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-450500 ssh -n                                                                                                          | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:33 UTC | 17 Mar 25 11:33 UTC |
	|         | ha-450500-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-450500 ssh -n ha-450500 sudo cat                                                                                       | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:33 UTC | 17 Mar 25 11:33 UTC |
	|         | /home/docker/cp-test_ha-450500-m04_ha-450500.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-450500 cp ha-450500-m04:/home/docker/cp-test.txt                                                                       | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:33 UTC | 17 Mar 25 11:33 UTC |
	|         | ha-450500-m02:/home/docker/cp-test_ha-450500-m04_ha-450500-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-450500 ssh -n                                                                                                          | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:33 UTC | 17 Mar 25 11:34 UTC |
	|         | ha-450500-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-450500 ssh -n ha-450500-m02 sudo cat                                                                                   | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:34 UTC | 17 Mar 25 11:34 UTC |
	|         | /home/docker/cp-test_ha-450500-m04_ha-450500-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-450500 cp ha-450500-m04:/home/docker/cp-test.txt                                                                       | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:34 UTC | 17 Mar 25 11:34 UTC |
	|         | ha-450500-m03:/home/docker/cp-test_ha-450500-m04_ha-450500-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-450500 ssh -n                                                                                                          | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:34 UTC | 17 Mar 25 11:34 UTC |
	|         | ha-450500-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-450500 ssh -n ha-450500-m03 sudo cat                                                                                   | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:34 UTC | 17 Mar 25 11:34 UTC |
	|         | /home/docker/cp-test_ha-450500-m04_ha-450500-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-450500 node stop m02 -v=7                                                                                              | ha-450500 | minikube6\jenkins | v1.35.0 | 17 Mar 25 11:34 UTC |                     |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
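In the Audit table above, the tell-tale row is the final `node stop` entry: it has a Start Time but a blank End Time, marking the command that never completed. A small sketch (not part of the minikube tooling; function and column handling are my own, and it only handles single-line rows, not the wrapped continuation rows) of scanning such pipe-delimited rows for unfinished commands:

```python
# Hypothetical helper: given pipe-delimited Audit rows with columns
# Command | Args | Profile | User | Version | Start Time | End Time,
# report rows whose End Time column is blank (command never finished).
def unfinished_commands(table_rows):
    """Return (command, args) pairs for rows with an empty End Time."""
    out = []
    for row in table_rows:
        cols = [c.strip() for c in row.strip().strip("|").split("|")]
        if len(cols) >= 7 and cols[0] and not cols[6]:
            out.append((cols[0], cols[1]))
    return out

# Two sample rows modeled on the table above.
rows = [
    "| ssh     | ha-450500 ssh -n ha-450500-m03 sudo cat | ha-450500 | minikube6\\jenkins | v1.35.0 | 17 Mar 25 11:34 UTC | 17 Mar 25 11:34 UTC |",
    "| node    | ha-450500 node stop m02 -v=7            | ha-450500 | minikube6\\jenkins | v1.35.0 | 17 Mar 25 11:34 UTC |                     |",
]
print(unfinished_commands(rows))
```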
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 11:05:16
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 11:05:16.652834    8508 out.go:345] Setting OutFile to fd 1296 ...
	I0317 11:05:16.727290    8508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:05:16.727290    8508 out.go:358] Setting ErrFile to fd 1704...
	I0317 11:05:16.727290    8508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 11:05:16.746123    8508 out.go:352] Setting JSON to false
	I0317 11:05:16.750127    8508 start.go:129] hostinfo: {"hostname":"minikube6","uptime":3293,"bootTime":1742206223,"procs":178,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5608 Build 19045.5608","kernelVersion":"10.0.19045.5608 Build 19045.5608","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0317 11:05:16.750127    8508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0317 11:05:16.757118    8508 out.go:177] * [ha-450500] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608
	I0317 11:05:16.761124    8508 notify.go:220] Checking for updates...
	I0317 11:05:16.764126    8508 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 11:05:16.766135    8508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 11:05:16.769121    8508 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0317 11:05:16.772120    8508 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 11:05:16.775115    8508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 11:05:16.778128    8508 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 11:05:22.209480    8508 out.go:177] * Using the hyperv driver based on user configuration
	I0317 11:05:22.213793    8508 start.go:297] selected driver: hyperv
	I0317 11:05:22.213793    8508 start.go:901] validating driver "hyperv" against <nil>
	I0317 11:05:22.213793    8508 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 11:05:22.263169    8508 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 11:05:22.264743    8508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 11:05:22.264743    8508 cni.go:84] Creating CNI manager for ""
	I0317 11:05:22.264743    8508 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0317 11:05:22.264743    8508 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0317 11:05:22.264743    8508 start.go:340] cluster config:
	{Name:ha-450500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:05:22.265671    8508 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 11:05:22.271349    8508 out.go:177] * Starting "ha-450500" primary control-plane node in "ha-450500" cluster
	I0317 11:05:22.274188    8508 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 11:05:22.274424    8508 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0317 11:05:22.274499    8508 cache.go:56] Caching tarball of preloaded images
	I0317 11:05:22.274899    8508 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 11:05:22.275061    8508 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0317 11:05:22.275663    8508 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json ...
	I0317 11:05:22.275915    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json: {Name:mk6a4b7a1771fbbf998c27c763b172cd014033ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:05:22.277349    8508 start.go:360] acquireMachinesLock for ha-450500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 11:05:22.277545    8508 start.go:364] duration metric: took 72.2µs to acquireMachinesLock for "ha-450500"
	I0317 11:05:22.277887    8508 start.go:93] Provisioning new machine with config: &{Name:ha-450500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 11:05:22.277963    8508 start.go:125] createHost starting for "" (driver="hyperv")
	I0317 11:05:22.280394    8508 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0317 11:05:22.281178    8508 start.go:159] libmachine.API.Create for "ha-450500" (driver="hyperv")
	I0317 11:05:22.281279    8508 client.go:168] LocalClient.Create starting
	I0317 11:05:22.281306    8508 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0317 11:05:22.281876    8508 main.go:141] libmachine: Decoding PEM data...
	I0317 11:05:22.281876    8508 main.go:141] libmachine: Parsing certificate...
	I0317 11:05:22.281876    8508 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0317 11:05:22.281876    8508 main.go:141] libmachine: Decoding PEM data...
	I0317 11:05:22.281876    8508 main.go:141] libmachine: Parsing certificate...
	I0317 11:05:22.282427    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0317 11:05:24.406557    8508 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0317 11:05:24.406557    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:24.406796    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0317 11:05:26.158096    8508 main.go:141] libmachine: [stdout =====>] : False
	
	I0317 11:05:26.158096    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:26.159005    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 11:05:27.688410    8508 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 11:05:27.688902    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:27.688974    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 11:05:31.389058    8508 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 11:05:31.389058    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:31.392877    8508 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0317 11:05:31.916010    8508 main.go:141] libmachine: Creating SSH key...
	I0317 11:05:32.140452    8508 main.go:141] libmachine: Creating VM...
	I0317 11:05:32.140452    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 11:05:35.042462    8508 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 11:05:35.042520    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:35.042520    8508 main.go:141] libmachine: Using switch "Default Switch"
	I0317 11:05:35.042520    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 11:05:36.861622    8508 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 11:05:36.861622    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:36.861727    8508 main.go:141] libmachine: Creating VHD
	I0317 11:05:36.861822    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0317 11:05:40.793443    8508 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 3A17D52E-98AD-4CD0-8637-F68C66327875
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0317 11:05:40.794423    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:40.794473    8508 main.go:141] libmachine: Writing magic tar header
	I0317 11:05:40.794557    8508 main.go:141] libmachine: Writing SSH key tar header
	I0317 11:05:40.808435    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0317 11:05:44.045046    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:05:44.046099    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:44.046158    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\disk.vhd' -SizeBytes 20000MB
	I0317 11:05:46.661572    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:05:46.661572    8508 main.go:141] libmachine: [stderr =====>] : 
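The three cmdlets above are the driver's disk-creation trick: it first creates a tiny *fixed* 10MB VHD (a flat image into which it can write the "magic tar header" and SSH key directly), then converts it to a dynamic VHD and grows it to the requested 20000MB. A minimal sketch of that command sequence — the helper name and paths are illustrative, not minikube's actual code:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// vhdCommands sketches the three Hyper-V cmdlet invocations seen in the
// log: create a 10MB *fixed* VHD (its flat layout lets the driver write
// a tar stream with the SSH key straight into the file), convert it to
// a dynamic VHD, then resize it to the target capacity.
func vhdCommands(machineDir string, sizeMB int) []string {
	fixed := filepath.Join(machineDir, "fixed.vhd")
	disk := filepath.Join(machineDir, "disk.vhd")
	return []string{
		fmt.Sprintf(`Hyper-V\New-VHD -Path '%s' -SizeBytes 10MB -Fixed`, fixed),
		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource`, fixed, disk),
		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s' -SizeBytes %dMB`, disk, sizeMB),
	}
}

func main() {
	for _, c := range vhdCommands("machines/ha-450500", 20000) {
		fmt.Println(c)
	}
}
```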
	I0317 11:05:46.661572    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-450500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0317 11:05:50.342966    8508 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-450500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0317 11:05:50.343968    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:50.343968    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-450500 -DynamicMemoryEnabled $false
	I0317 11:05:52.638652    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:05:52.638652    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:52.639540    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-450500 -Count 2
	I0317 11:05:54.910267    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:05:54.911285    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:54.911329    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-450500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\boot2docker.iso'
	I0317 11:05:57.584529    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:05:57.585010    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:05:57.585060    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-450500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\disk.vhd'
	I0317 11:06:00.282011    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:06:00.282011    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:00.282011    8508 main.go:141] libmachine: Starting VM...
	I0317 11:06:00.282011    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-450500
	I0317 11:06:03.470563    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:06:03.470765    8508 main.go:141] libmachine: [stderr =====>] : 
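Between the VHD conversion and this point the driver issues one cmdlet per VM property: `New-VM` on 'Default Switch', `Set-VMMemory` with dynamic memory disabled (presumably so the guest gets exactly the requested allocation), `Set-VMProcessor -Count 2`, `Set-VMDvdDrive` pointing at boot2docker.iso, `Add-VMHardDiskDrive` for disk.vhd, and finally `Start-VM`. A hedged sketch of that sequence as data; the helper name is illustrative:

```go
package main

import "fmt"

// vmSetupCommands lists, in order, the Hyper-V cmdlets the log shows
// for turning the freshly created VHD into a bootable, running VM.
func vmSetupCommands(name, dir string, memMB, cpus int) []string {
	return []string{
		fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes %dMB`, name, dir, memMB),
		fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name),
		fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count %d`, name, cpus),
		fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s/boot2docker.iso'`, name, dir),
		fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s/disk.vhd'`, name, dir),
		fmt.Sprintf(`Hyper-V\Start-VM %s`, name),
	}
}

func main() {
	for _, c := range vmSetupCommands("ha-450500", "machines/ha-450500", 2200, 2) {
		fmt.Println(c)
	}
}
```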
	I0317 11:06:03.470765    8508 main.go:141] libmachine: Waiting for host to start...
	I0317 11:06:03.470876    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:05.752314    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:05.753279    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:05.753347    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:08.305190    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:06:08.305190    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:09.305483    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:11.540380    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:11.540380    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:11.540774    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:14.102946    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:06:14.103211    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:15.104620    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:17.358582    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:17.358684    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:17.358752    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:19.889431    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:06:19.890106    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:20.890785    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:23.132030    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:23.132327    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:23.132522    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:25.659276    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:06:25.659944    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:26.661003    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:28.918666    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:28.918666    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:28.919215    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:31.548433    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:06:31.548433    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:31.549138    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:33.689233    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:33.689822    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:33.689866    8508 machine.go:93] provisionDockerMachine start ...
	I0317 11:06:33.690002    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:35.862241    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:35.862241    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:35.862342    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:38.438530    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:06:38.438530    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:38.444781    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:06:38.459326    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.34 22 <nil> <nil>}
	I0317 11:06:38.459326    8508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 11:06:38.602091    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0317 11:06:38.602091    8508 buildroot.go:166] provisioning hostname "ha-450500"
	I0317 11:06:38.602091    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:40.724125    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:40.724125    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:40.724408    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:43.272848    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:06:43.273265    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:43.280099    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:06:43.281056    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.34 22 <nil> <nil>}
	I0317 11:06:43.281056    8508 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450500 && echo "ha-450500" | sudo tee /etc/hostname
	I0317 11:06:43.447356    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450500
	
	I0317 11:06:43.447356    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:45.558133    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:45.558671    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:45.558783    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:48.047202    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:06:48.047202    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:48.053645    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:06:48.054245    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.34 22 <nil> <nil>}
	I0317 11:06:48.054801    8508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 11:06:48.208085    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
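The inline script above keeps /etc/hosts idempotent: if no line already maps the hostname, it either rewrites an existing 127.0.1.1 entry in place with sed or appends a fresh one with `tee -a`. The same decision expressed in Go, operating on file contents rather than over SSH — the helper is illustrative, not the driver's implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry mirrors the shell logic in the log: leave the file
// alone if the hostname is already present, rewrite an existing
// 127.0.1.1 line if there is one, otherwise append a new entry.
func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), " "+name) {
			return hosts // hostname already mapped
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name
}

func main() {
	fmt.Println(ensureHostsEntry("127.0.0.1 localhost", "ha-450500"))
}
```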
	I0317 11:06:48.208085    8508 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0317 11:06:48.208085    8508 buildroot.go:174] setting up certificates
	I0317 11:06:48.208085    8508 provision.go:84] configureAuth start
	I0317 11:06:48.208085    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:50.326279    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:50.326875    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:50.326875    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:52.836115    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:06:52.836115    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:52.837225    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:54.934775    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:54.934775    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:54.935139    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:06:57.464637    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:06:57.465057    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:57.465057    8508 provision.go:143] copyHostCerts
	I0317 11:06:57.465057    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0317 11:06:57.465057    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0317 11:06:57.465647    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0317 11:06:57.465874    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0317 11:06:57.467642    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0317 11:06:57.467642    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0317 11:06:57.467642    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0317 11:06:57.468441    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0317 11:06:57.469741    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0317 11:06:57.469895    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0317 11:06:57.469895    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0317 11:06:57.470604    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0317 11:06:57.471342    8508 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-450500 san=[127.0.0.1 172.25.16.34 ha-450500 localhost minikube]
	I0317 11:06:57.574873    8508 provision.go:177] copyRemoteCerts
	I0317 11:06:57.587807    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 11:06:57.588248    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:06:59.669944    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:06:59.670600    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:06:59.670656    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:02.211738    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:02.211738    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:02.212959    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:07:02.322425    8508 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7345841s)
	I0317 11:07:02.322425    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0317 11:07:02.323040    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 11:07:02.372108    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0317 11:07:02.372652    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0317 11:07:02.424286    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0317 11:07:02.424592    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0317 11:07:02.469019    8508 provision.go:87] duration metric: took 14.2598293s to configureAuth
	I0317 11:07:02.469019    8508 buildroot.go:189] setting minikube options for container-runtime
	I0317 11:07:02.469019    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:07:02.469019    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:04.598319    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:04.598356    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:04.598441    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:07.174176    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:07.174228    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:07.180603    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:07:07.181130    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.34 22 <nil> <nil>}
	I0317 11:07:07.181228    8508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0317 11:07:07.319818    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0317 11:07:07.319818    8508 buildroot.go:70] root file system type: tmpfs
	I0317 11:07:07.320138    8508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0317 11:07:07.320218    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:09.447571    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:09.447802    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:09.447965    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:11.977682    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:11.977682    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:11.984772    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:07:11.985456    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.34 22 <nil> <nil>}
	I0317 11:07:11.985456    8508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0317 11:07:12.148479    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0317 11:07:12.148479    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:14.242515    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:14.242923    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:14.243049    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:16.738945    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:16.739041    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:16.743692    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:07:16.744619    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.34 22 <nil> <nil>}
	I0317 11:07:16.744619    8508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0317 11:07:19.057620    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0317 11:07:19.057620    8508 machine.go:96] duration metric: took 45.3674278s to provisionDockerMachine
	I0317 11:07:19.057620    8508 client.go:171] duration metric: took 1m56.7755093s to LocalClient.Create
	I0317 11:07:19.057620    8508 start.go:167] duration metric: took 1m56.7756106s to libmachine.API.Create "ha-450500"
	I0317 11:07:19.057620    8508 start.go:293] postStartSetup for "ha-450500" (driver="hyperv")
	I0317 11:07:19.057620    8508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 11:07:19.071834    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 11:07:19.071834    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:21.191454    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:21.191630    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:21.191630    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:23.766217    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:23.766404    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:23.766877    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:07:23.882290    8508 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8103595s)
	I0317 11:07:23.894382    8508 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 11:07:23.901381    8508 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 11:07:23.901381    8508 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0317 11:07:23.901381    8508 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0317 11:07:23.902983    8508 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> 89402.pem in /etc/ssl/certs
	I0317 11:07:23.903096    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> /etc/ssl/certs/89402.pem
	I0317 11:07:23.914446    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 11:07:23.931326    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /etc/ssl/certs/89402.pem (1708 bytes)
	I0317 11:07:23.981721    8508 start.go:296] duration metric: took 4.924066s for postStartSetup
	I0317 11:07:23.984818    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:26.138443    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:26.138443    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:26.138879    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:28.670115    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:28.670846    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:28.671026    8508 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json ...
	I0317 11:07:28.674281    8508 start.go:128] duration metric: took 2m6.3952934s to createHost
	I0317 11:07:28.674366    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:30.812607    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:30.813135    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:30.813234    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:33.333826    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:33.333826    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:33.339847    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:07:33.340624    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.34 22 <nil> <nil>}
	I0317 11:07:33.340624    8508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 11:07:33.470448    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742209653.491241352
	
	I0317 11:07:33.470448    8508 fix.go:216] guest clock: 1742209653.491241352
	I0317 11:07:33.470448    8508 fix.go:229] Guest: 2025-03-17 11:07:33.491241352 +0000 UTC Remote: 2025-03-17 11:07:28.6742815 +0000 UTC m=+132.126838901 (delta=4.816959852s)
	I0317 11:07:33.470448    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:35.607199    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:35.607372    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:35.607372    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:38.265346    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:38.265346    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:38.275246    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:07:38.275246    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.34 22 <nil> <nil>}
	I0317 11:07:38.275766    8508 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1742209653
	I0317 11:07:38.431790    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 17 11:07:33 UTC 2025
	
	I0317 11:07:38.431904    8508 fix.go:236] clock set: Mon Mar 17 11:07:33 UTC 2025
	 (err=<nil>)
	I0317 11:07:38.431904    8508 start.go:83] releasing machines lock for "ha-450500", held for 2m16.153325s
	I0317 11:07:38.432036    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:40.561868    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:40.562587    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:40.562635    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:43.063749    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:43.063749    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:43.068810    8508 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0317 11:07:43.068991    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:43.078294    8508 ssh_runner.go:195] Run: cat /version.json
	I0317 11:07:43.078294    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:07:45.304947    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:45.304947    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:45.304947    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:07:45.304947    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:45.304947    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:45.304947    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:07:47.967051    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:47.967575    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:47.968040    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:07:47.989228    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:07:47.989228    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:07:47.990439    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:07:48.068714    8508 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9997956s)
	W0317 11:07:48.068844    8508 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0317 11:07:48.085047    8508 ssh_runner.go:235] Completed: cat /version.json: (5.0067167s)
	I0317 11:07:48.097661    8508 ssh_runner.go:195] Run: systemctl --version
	I0317 11:07:48.116635    8508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 11:07:48.126020    8508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 11:07:48.136852    8508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 11:07:48.166553    8508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 11:07:48.166553    8508 start.go:495] detecting cgroup driver to use...
	I0317 11:07:48.166553    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0317 11:07:48.198739    8508 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0317 11:07:48.198739    8508 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
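The exit-127 above is the source of the two stderr lines that failed TestErrorSpam: the probe command was built with the Windows host's binary name (`curl.exe`) but executed over SSH inside the Linux guest, where bash reports "curl.exe: command not found". A hedged Go sketch of the fix, keying the binary name off the OS where the command will run rather than the host; function names are illustrative:

```go
package main

import "fmt"

// curlBinary picks the curl binary for the OS the command will RUN on,
// not the OS minikube itself runs on. Passing the host GOOS ("windows")
// for a command destined for the Linux guest reproduces the 127 above.
func curlBinary(targetGOOS string) string {
	if targetGOOS == "windows" {
		return "curl.exe"
	}
	return "curl"
}

// probeCommand builds the registry reachability probe from the log.
func probeCommand(targetGOOS, url string) string {
	return fmt.Sprintf("%s -sS -m 2 %s", curlBinary(targetGOOS), url)
}

func main() {
	// The minikube VM guest is Linux regardless of the host OS.
	fmt.Println(probeCommand("linux", "https://registry.k8s.io/"))
}
```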
	I0317 11:07:48.212733    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 11:07:48.246298    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 11:07:48.265560    8508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 11:07:48.277301    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 11:07:48.308270    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:07:48.337130    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 11:07:48.365607    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:07:48.394249    8508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 11:07:48.424358    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 11:07:48.456129    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 11:07:48.486380    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 11:07:48.516837    8508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 11:07:48.533901    8508 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 11:07:48.545548    8508 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 11:07:48.578255    8508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 11:07:48.609242    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:07:48.805911    8508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 11:07:48.836988    8508 start.go:495] detecting cgroup driver to use...
	I0317 11:07:48.848439    8508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0317 11:07:48.882231    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 11:07:48.918400    8508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 11:07:48.964536    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 11:07:49.000059    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:07:49.036314    8508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 11:07:49.110424    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:07:49.140013    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:07:49.188841    8508 ssh_runner.go:195] Run: which cri-dockerd
	I0317 11:07:49.207748    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0317 11:07:49.235265    8508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0317 11:07:49.287033    8508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0317 11:07:49.504105    8508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0317 11:07:49.686378    8508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0317 11:07:49.686639    8508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0317 11:07:49.730372    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:07:49.916415    8508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 11:07:52.514427    8508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5979935s)
	I0317 11:07:52.525908    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0317 11:07:52.560115    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 11:07:52.596189    8508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0317 11:07:52.803774    8508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0317 11:07:53.001991    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:07:53.197267    8508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0317 11:07:53.238606    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 11:07:53.270323    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:07:53.456084    8508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0317 11:07:53.566069    8508 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0317 11:07:53.577907    8508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0317 11:07:53.587726    8508 start.go:563] Will wait 60s for crictl version
	I0317 11:07:53.597916    8508 ssh_runner.go:195] Run: which crictl
	I0317 11:07:53.613740    8508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 11:07:53.666786    8508 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0317 11:07:53.676894    8508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 11:07:53.717705    8508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 11:07:53.757058    8508 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0317 11:07:53.757284    8508 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0317 11:07:53.761648    8508 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0317 11:07:53.761648    8508 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0317 11:07:53.761648    8508 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0317 11:07:53.761648    8508 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4b:84:d5 Flags:up|broadcast|multicast|running}
	I0317 11:07:53.764814    8508 ip.go:214] interface addr: fe80::f0c7:c31c:6237:ef35/64
	I0317 11:07:53.764814    8508 ip.go:214] interface addr: 172.25.16.1/20
	I0317 11:07:53.777915    8508 ssh_runner.go:195] Run: grep 172.25.16.1	host.minikube.internal$ /etc/hosts
	I0317 11:07:53.783967    8508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:07:53.815608    8508 kubeadm.go:883] updating cluster {Name:ha-450500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 11:07:53.815608    8508 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 11:07:53.823524    8508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 11:07:53.848590    8508 docker.go:689] Got preloaded images: 
	I0317 11:07:53.848590    8508 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.2 wasn't preloaded
	I0317 11:07:53.859692    8508 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0317 11:07:53.886973    8508 ssh_runner.go:195] Run: which lz4
	I0317 11:07:53.894192    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0317 11:07:53.905759    8508 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 11:07:53.913037    8508 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 11:07:53.913037    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349803115 bytes)
	I0317 11:07:56.086887    8508 docker.go:653] duration metric: took 2.1926787s to copy over tarball
	I0317 11:07:56.098139    8508 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 11:08:04.581762    8508 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4834842s)
	I0317 11:08:04.581902    8508 ssh_runner.go:146] rm: /preloaded.tar.lz4
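The preload path above follows a check/copy/extract/cleanup sequence: a `stat` probe fails with status 1, so the cached image tarball is scp'd to the guest, unpacked with `tar -I lz4`, then removed. A sketch of that decision sequence under stated assumptions; the `scp <cached tarball>` string is a placeholder for the real transfer, not an executable command:

```go
package main

import "fmt"

// preloadPlan lists the guest-side steps from the log. `exists` stands in
// for the result of the stat probe; the copy happens only when it failed.
func preloadPlan(target string, exists bool) []string {
	cmds := []string{fmt.Sprintf("stat -c '%%s %%y' %s", target)}
	if !exists {
		cmds = append(cmds, fmt.Sprintf("scp <cached tarball> -> %s", target))
	}
	return append(cmds,
		fmt.Sprintf("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf %s", target),
		fmt.Sprintf("rm -f %s", target))
}

func main() {
	for _, c := range preloadPlan("/preloaded.tar.lz4", false) {
		fmt.Println(c)
	}
}
```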
	I0317 11:08:04.641598    8508 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0317 11:08:04.660043    8508 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0317 11:08:04.703684    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:08:04.916137    8508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 11:08:08.107535    8508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1913743s)
	I0317 11:08:08.118452    8508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 11:08:08.150187    8508 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0317 11:08:08.150187    8508 cache_images.go:84] Images are preloaded, skipping loading
	I0317 11:08:08.150187    8508 kubeadm.go:934] updating node { 172.25.16.34 8443 v1.32.2 docker true true} ...
	I0317 11:08:08.150187    8508 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.16.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 11:08:08.159478    8508 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0317 11:08:08.224798    8508 cni.go:84] Creating CNI manager for ""
	I0317 11:08:08.224798    8508 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0317 11:08:08.224798    8508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 11:08:08.224798    8508 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.16.34 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-450500 NodeName:ha-450500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.16.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.16.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 11:08:08.225789    8508 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.16.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-450500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.16.34"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.16.34"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 11:08:08.225789    8508 kube-vip.go:115] generating kube-vip config ...
	I0317 11:08:08.236549    8508 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0317 11:08:08.263962    8508 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0317 11:08:08.264219    8508 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.31.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0317 11:08:08.275028    8508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 11:08:08.295667    8508 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 11:08:08.307845    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0317 11:08:08.325114    8508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0317 11:08:08.355516    8508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 11:08:08.383885    8508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0317 11:08:08.415041    8508 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0317 11:08:08.459141    8508 ssh_runner.go:195] Run: grep 172.25.31.254	control-plane.minikube.internal$ /etc/hosts
	I0317 11:08:08.465567    8508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.31.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:08:08.498686    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:08:08.702254    8508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:08:08.732709    8508 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500 for IP: 172.25.16.34
	I0317 11:08:08.732847    8508 certs.go:194] generating shared ca certs ...
	I0317 11:08:08.732889    8508 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:08.733679    8508 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0317 11:08:08.734428    8508 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0317 11:08:08.735134    8508 certs.go:256] generating profile certs ...
	I0317 11:08:08.736005    8508 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\client.key
	I0317 11:08:08.736172    8508 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\client.crt with IP's: []
	I0317 11:08:09.705510    8508 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\client.crt ...
	I0317 11:08:09.705510    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\client.crt: {Name:mk792f6749124d49fe283a3b917333e6f455939f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:09.707542    8508 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\client.key ...
	I0317 11:08:09.707542    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\client.key: {Name:mk647a2008ad32a86ebab67a6a73f60ff9f49cf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:09.708213    8508 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.a805433c
	I0317 11:08:09.709275    8508 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.a805433c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.16.34 172.25.31.254]
	I0317 11:08:09.920893    8508 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.a805433c ...
	I0317 11:08:09.920893    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.a805433c: {Name:mkd850b7327a2bc3127130883e5f1b38083dd5a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:09.922619    8508 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.a805433c ...
	I0317 11:08:09.922619    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.a805433c: {Name:mk75d42a89cfec0612d2f7dcffbd0ccb9e1383fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:09.924040    8508 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.a805433c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt
	I0317 11:08:09.937753    8508 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.a805433c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key
	I0317 11:08:09.939743    8508 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key
	I0317 11:08:09.939743    8508 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt with IP's: []
	I0317 11:08:10.018587    8508 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt ...
	I0317 11:08:10.018587    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt: {Name:mk28db02829d3ca8191927e42e9af9bbc1f3f5e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:10.020694    8508 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key ...
	I0317 11:08:10.020694    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key: {Name:mk7f8d2926c5b727595db9114a62364d0fc7349c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:10.020980    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0317 11:08:10.022211    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0317 11:08:10.022370    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0317 11:08:10.022563    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0317 11:08:10.022755    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0317 11:08:10.022925    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0317 11:08:10.023114    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0317 11:08:10.032570    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0317 11:08:10.033045    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem (1338 bytes)
	W0317 11:08:10.033854    8508 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940_empty.pem, impossibly tiny 0 bytes
	I0317 11:08:10.033854    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0317 11:08:10.034496    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0317 11:08:10.034760    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0317 11:08:10.035045    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0317 11:08:10.035353    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem (1708 bytes)
	I0317 11:08:10.035353    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem -> /usr/share/ca-certificates/8940.pem
	I0317 11:08:10.036166    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> /usr/share/ca-certificates/89402.pem
	I0317 11:08:10.036326    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:08:10.036482    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 11:08:10.088279    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 11:08:10.131133    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 11:08:10.176329    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0317 11:08:10.218354    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0317 11:08:10.265277    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 11:08:10.306874    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 11:08:10.351251    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 11:08:10.401290    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem --> /usr/share/ca-certificates/8940.pem (1338 bytes)
	I0317 11:08:10.451505    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /usr/share/ca-certificates/89402.pem (1708 bytes)
	I0317 11:08:10.498103    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 11:08:10.543737    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 11:08:10.589659    8508 ssh_runner.go:195] Run: openssl version
	I0317 11:08:10.608987    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8940.pem && ln -fs /usr/share/ca-certificates/8940.pem /etc/ssl/certs/8940.pem"
	I0317 11:08:10.639956    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8940.pem
	I0317 11:08:10.647256    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:46 /usr/share/ca-certificates/8940.pem
	I0317 11:08:10.657878    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8940.pem
	I0317 11:08:10.680550    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8940.pem /etc/ssl/certs/51391683.0"
	I0317 11:08:10.710433    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/89402.pem && ln -fs /usr/share/ca-certificates/89402.pem /etc/ssl/certs/89402.pem"
	I0317 11:08:10.742685    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89402.pem
	I0317 11:08:10.749737    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:46 /usr/share/ca-certificates/89402.pem
	I0317 11:08:10.760758    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89402.pem
	I0317 11:08:10.780972    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/89402.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 11:08:10.811824    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 11:08:10.842803    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:08:10.850081    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:08:10.861526    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:08:10.885418    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 11:08:10.915356    8508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 11:08:10.925707    8508 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 11:08:10.926224    8508 kubeadm.go:392] StartCluster: {Name:ha-450500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:08:10.935184    8508 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0317 11:08:10.971556    8508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 11:08:11.002448    8508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 11:08:11.038158    8508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 11:08:11.062175    8508 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 11:08:11.062271    8508 kubeadm.go:157] found existing configuration files:
	
	I0317 11:08:11.073756    8508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 11:08:11.095067    8508 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 11:08:11.109979    8508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 11:08:11.140209    8508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 11:08:11.157259    8508 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 11:08:11.168957    8508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 11:08:11.200732    8508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 11:08:11.222130    8508 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 11:08:11.234462    8508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 11:08:11.263987    8508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 11:08:11.284038    8508 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 11:08:11.295071    8508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 11:08:11.313654    8508 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 11:08:11.789699    8508 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 11:08:26.460567    8508 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 11:08:26.460686    8508 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 11:08:26.460772    8508 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 11:08:26.460960    8508 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 11:08:26.461276    8508 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 11:08:26.461428    8508 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 11:08:26.468407    8508 out.go:235]   - Generating certificates and keys ...
	I0317 11:08:26.468660    8508 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 11:08:26.468776    8508 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 11:08:26.468890    8508 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 11:08:26.468890    8508 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 11:08:26.468890    8508 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 11:08:26.469498    8508 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 11:08:26.469621    8508 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 11:08:26.469989    8508 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-450500 localhost] and IPs [172.25.16.34 127.0.0.1 ::1]
	I0317 11:08:26.470311    8508 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 11:08:26.470495    8508 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-450500 localhost] and IPs [172.25.16.34 127.0.0.1 ::1]
	I0317 11:08:26.470495    8508 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 11:08:26.470495    8508 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 11:08:26.471051    8508 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 11:08:26.471234    8508 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 11:08:26.471368    8508 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 11:08:26.471415    8508 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 11:08:26.471415    8508 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 11:08:26.471415    8508 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 11:08:26.471415    8508 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 11:08:26.472163    8508 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 11:08:26.472405    8508 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 11:08:26.477862    8508 out.go:235]   - Booting up control plane ...
	I0317 11:08:26.477887    8508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 11:08:26.477887    8508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 11:08:26.477887    8508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 11:08:26.478628    8508 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 11:08:26.478960    8508 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 11:08:26.479042    8508 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 11:08:26.479496    8508 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 11:08:26.479599    8508 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 11:08:26.479599    8508 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001798459s
	I0317 11:08:26.479599    8508 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 11:08:26.480161    8508 kubeadm.go:310] [api-check] The API server is healthy after 8.502452388s
	I0317 11:08:26.480419    8508 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 11:08:26.480419    8508 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 11:08:26.480419    8508 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 11:08:26.480995    8508 kubeadm.go:310] [mark-control-plane] Marking the node ha-450500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 11:08:26.481134    8508 kubeadm.go:310] [bootstrap-token] Using token: is9sac.0uzmczoyhbxhsua1
	I0317 11:08:26.499289    8508 out.go:235]   - Configuring RBAC rules ...
	I0317 11:08:26.500534    8508 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 11:08:26.500726    8508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 11:08:26.501093    8508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 11:08:26.501429    8508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 11:08:26.501429    8508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 11:08:26.502141    8508 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 11:08:26.502357    8508 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 11:08:26.502357    8508 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 11:08:26.502684    8508 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 11:08:26.502730    8508 kubeadm.go:310] 
	I0317 11:08:26.502730    8508 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 11:08:26.502730    8508 kubeadm.go:310] 
	I0317 11:08:26.502730    8508 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 11:08:26.502730    8508 kubeadm.go:310] 
	I0317 11:08:26.503261    8508 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 11:08:26.503397    8508 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 11:08:26.503508    8508 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 11:08:26.503508    8508 kubeadm.go:310] 
	I0317 11:08:26.503613    8508 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 11:08:26.503613    8508 kubeadm.go:310] 
	I0317 11:08:26.503737    8508 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 11:08:26.503737    8508 kubeadm.go:310] 
	I0317 11:08:26.503737    8508 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 11:08:26.503737    8508 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 11:08:26.503737    8508 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 11:08:26.503737    8508 kubeadm.go:310] 
	I0317 11:08:26.503737    8508 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 11:08:26.503737    8508 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 11:08:26.503737    8508 kubeadm.go:310] 
	I0317 11:08:26.503737    8508 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token is9sac.0uzmczoyhbxhsua1 \
	I0317 11:08:26.503737    8508 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 \
	I0317 11:08:26.503737    8508 kubeadm.go:310] 	--control-plane 
	I0317 11:08:26.503737    8508 kubeadm.go:310] 
	I0317 11:08:26.505311    8508 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 11:08:26.505311    8508 kubeadm.go:310] 
	I0317 11:08:26.505571    8508 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token is9sac.0uzmczoyhbxhsua1 \
	I0317 11:08:26.505571    8508 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 
	I0317 11:08:26.505571    8508 cni.go:84] Creating CNI manager for ""
	I0317 11:08:26.505571    8508 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0317 11:08:26.508839    8508 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0317 11:08:26.523761    8508 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0317 11:08:26.531405    8508 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 11:08:26.531405    8508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0317 11:08:26.582932    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0317 11:08:27.384052    8508 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 11:08:27.398229    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:08:27.399238    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450500 minikube.k8s.io/updated_at=2025_03_17T11_08_27_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=ha-450500 minikube.k8s.io/primary=true
	I0317 11:08:27.413886    8508 ops.go:34] apiserver oom_adj: -16
	I0317 11:08:27.607964    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:08:28.108863    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:08:28.606618    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:08:29.109015    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:08:29.607145    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:08:30.107300    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 11:08:30.225632    8508 kubeadm.go:1113] duration metric: took 2.841509s to wait for elevateKubeSystemPrivileges
	I0317 11:08:30.225632    8508 kubeadm.go:394] duration metric: took 19.299267s to StartCluster
	I0317 11:08:30.225632    8508 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:30.225632    8508 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 11:08:30.228894    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:08:30.231546    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 11:08:30.231672    8508 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.25.16.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 11:08:30.231672    8508 start.go:241] waiting for startup goroutines ...
	I0317 11:08:30.231672    8508 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 11:08:30.231838    8508 addons.go:69] Setting storage-provisioner=true in profile "ha-450500"
	I0317 11:08:30.231893    8508 addons.go:69] Setting default-storageclass=true in profile "ha-450500"
	I0317 11:08:30.231947    8508 addons.go:238] Setting addon storage-provisioner=true in "ha-450500"
	I0317 11:08:30.231996    8508 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-450500"
	I0317 11:08:30.232059    8508 host.go:66] Checking if "ha-450500" exists ...
	I0317 11:08:30.232059    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:08:30.233192    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:08:30.233827    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:08:30.415922    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.16.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 11:08:30.929641    8508 start.go:971] {"host.minikube.internal": 172.25.16.1} host record injected into CoreDNS's ConfigMap
	I0317 11:08:32.576300    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:08:32.577291    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:32.580666    8508 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 11:08:32.582645    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:08:32.582645    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:32.583286    8508 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:08:32.583286    8508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 11:08:32.583286    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:08:32.583924    8508 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 11:08:32.584929    8508 kapi.go:59] client config for ha-450500: &rest.Config{Host:"https://172.25.31.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-450500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-450500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2e292e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0317 11:08:32.586383    8508 cert_rotation.go:140] Starting client certificate rotation controller
	I0317 11:08:32.586383    8508 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0317 11:08:32.586910    8508 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0317 11:08:32.586910    8508 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0317 11:08:32.586949    8508 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0317 11:08:32.587426    8508 addons.go:238] Setting addon default-storageclass=true in "ha-450500"
	I0317 11:08:32.587463    8508 host.go:66] Checking if "ha-450500" exists ...
	I0317 11:08:32.588904    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:08:34.974372    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:08:34.974372    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:34.974372    8508 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 11:08:34.974372    8508 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 11:08:34.974372    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:08:35.095004    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:08:35.095004    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:35.096158    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:08:37.297268    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:08:37.297268    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:37.297268    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:08:37.893766    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:08:37.893830    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:37.893830    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:08:38.061352    8508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 11:08:40.047885    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:08:40.047885    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:40.047885    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:08:40.187407    8508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 11:08:40.331042    8508 round_trippers.go:470] GET https://172.25.31.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0317 11:08:40.331042    8508 round_trippers.go:476] Request Headers:
	I0317 11:08:40.331042    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:08:40.331042    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:08:40.343618    8508 round_trippers.go:581] Response Status: 200 OK in 12 milliseconds
	I0317 11:08:40.343618    8508 round_trippers.go:470] PUT https://172.25.31.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0317 11:08:40.343618    8508 round_trippers.go:476] Request Headers:
	I0317 11:08:40.343618    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:08:40.343618    8508 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 11:08:40.343618    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:08:40.348206    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:08:40.351924    8508 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 11:08:40.355964    8508 addons.go:514] duration metric: took 10.1242178s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0317 11:08:40.356178    8508 start.go:246] waiting for cluster config update ...
	I0317 11:08:40.356178    8508 start.go:255] writing updated cluster config ...
	I0317 11:08:40.359475    8508 out.go:201] 
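The addon-enablement phase that ends above follows a simple pattern: each manifest is scp'd to `/etc/kubernetes/addons/`, applied with the cluster's own bundled kubectl over SSH, and the total wall-clock time is reported as a duration metric. A minimal sketch of that loop, with `run_ssh` as a hypothetical stand-in for minikube's ssh_runner (the real code runs each apply concurrently per addon):

```python
import time

# Path of the kubectl binary inside the VM, as seen in the log lines above.
KUBECTL = "/var/lib/minikube/binaries/v1.32.2/kubectl"


def enable_addons(run_ssh, manifests):
    """Apply each addon manifest via SSH; return elapsed seconds."""
    start = time.monotonic()
    for path in manifests:
        # Mirrors: sudo KUBECONFIG=... kubectl apply -f <manifest>
        run_ssh(
            f"sudo KUBECONFIG=/var/lib/minikube/kubeconfig "
            f"{KUBECTL} apply -f {path}"
        )
    return time.monotonic() - start


if __name__ == "__main__":
    executed = []  # fake runner: just record the commands
    elapsed = enable_addons(executed.append, [
        "/etc/kubernetes/addons/storage-provisioner.yaml",
        "/etc/kubernetes/addons/storageclass.yaml",
    ])
    print(len(executed), "commands in", elapsed, "s")
```

In the run above the same two manifests account for most of the reported `10.1242178s`, dominated by the Hyper-V state/IP queries needed to open each SSH session rather than the applies themselves.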
	I0317 11:08:40.373501    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:08:40.373664    8508 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json ...
	I0317 11:08:40.379719    8508 out.go:177] * Starting "ha-450500-m02" control-plane node in "ha-450500" cluster
	I0317 11:08:40.384727    8508 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 11:08:40.384727    8508 cache.go:56] Caching tarball of preloaded images
	I0317 11:08:40.384727    8508 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 11:08:40.384727    8508 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0317 11:08:40.384727    8508 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json ...
	I0317 11:08:40.389763    8508 start.go:360] acquireMachinesLock for ha-450500-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 11:08:40.389763    8508 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-450500-m02"
	I0317 11:08:40.390757    8508 start.go:93] Provisioning new machine with config: &{Name:ha-450500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName
:ha-450500 Namespace:default APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 11:08:40.390757    8508 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0317 11:08:40.398762    8508 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0317 11:08:40.398762    8508 start.go:159] libmachine.API.Create for "ha-450500" (driver="hyperv")
	I0317 11:08:40.398762    8508 client.go:168] LocalClient.Create starting
	I0317 11:08:40.398762    8508 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0317 11:08:40.399752    8508 main.go:141] libmachine: Decoding PEM data...
	I0317 11:08:40.399752    8508 main.go:141] libmachine: Parsing certificate...
	I0317 11:08:40.399752    8508 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0317 11:08:40.399752    8508 main.go:141] libmachine: Decoding PEM data...
	I0317 11:08:40.399752    8508 main.go:141] libmachine: Parsing certificate...
	I0317 11:08:40.399752    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0317 11:08:42.280274    8508 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0317 11:08:42.280559    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:42.280559    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0317 11:08:44.005478    8508 main.go:141] libmachine: [stdout =====>] : False
	
	I0317 11:08:44.005478    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:44.005478    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 11:08:45.497998    8508 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 11:08:45.497998    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:45.498300    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 11:08:49.186567    8508 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 11:08:49.186567    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:49.189822    8508 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0317 11:08:49.727014    8508 main.go:141] libmachine: Creating SSH key...
	I0317 11:08:50.391236    8508 main.go:141] libmachine: Creating VM...
	I0317 11:08:50.391236    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 11:08:53.320458    8508 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 11:08:53.320458    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:53.320684    8508 main.go:141] libmachine: Using switch "Default Switch"
	I0317 11:08:53.320684    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 11:08:55.132226    8508 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 11:08:55.132226    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:55.132226    8508 main.go:141] libmachine: Creating VHD
	I0317 11:08:55.132226    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0317 11:08:58.997547    8508 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 433DC87A-8DF4-4BBE-8DA4-9CCBCB4F2077
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0317 11:08:58.997622    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:08:58.997622    8508 main.go:141] libmachine: Writing magic tar header
	I0317 11:08:58.997697    8508 main.go:141] libmachine: Writing SSH key tar header
	I0317 11:08:59.010563    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0317 11:09:02.215149    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:02.215149    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:02.215417    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\disk.vhd' -SizeBytes 20000MB
	I0317 11:09:04.761419    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:04.761419    8508 main.go:141] libmachine: [stderr =====>] : 
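The VHD sequence above ("Writing magic tar header" / "Writing SSH key tar header" between `New-VHD -Fixed` and `Convert-VHD ... -DeleteSource`) appears to seed the guest's SSH key by writing a tiny tar archive at byte 0 of the raw fixed disk, which the boot2docker image can unpack on first boot; converting to a dynamic VHD and resizing afterwards preserves that data. A minimal sketch of the tar-embedding step, under that assumption (file names and the 10MB size are taken from the log; the `.ssh/` path inside the tar is illustrative):

```python
import io
import tarfile

def write_key_tar(disk_path, key_name, key_bytes):
    """Embed a one-file tar archive at the start of a raw disk image."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=key_name)
        info.size = len(key_bytes)
        tar.addfile(info, io.BytesIO(key_bytes))
    # Overwrite the first bytes of the (pre-sized, zero-filled) disk in place.
    with open(disk_path, "r+b") as disk:
        disk.seek(0)
        disk.write(buf.getvalue())

if __name__ == "__main__":
    import os, tempfile
    path = os.path.join(tempfile.mkdtemp(), "fixed.raw")
    with open(path, "wb") as f:
        f.write(b"\0" * (10 * 1024 * 1024))  # stand-in for the 10MB fixed VHD
    write_key_tar(path, ".ssh/id_rsa.pub", b"ssh-rsa AAAA... demo")
    # Trailing zero blocks are valid tar padding, so the image itself parses:
    with tarfile.open(path) as tar:
        print(tar.getnames())
```

This is why the disk is first created as a small *fixed* VHD (whose payload starts at offset 0, unlike a dynamic VHD with its own header) and only then converted to dynamic and resized to the requested 20000MB.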
	I0317 11:09:04.762289    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-450500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0317 11:09:08.448421    8508 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-450500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0317 11:09:08.448421    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:08.448979    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-450500-m02 -DynamicMemoryEnabled $false
	I0317 11:09:10.727631    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:10.728647    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:10.728766    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-450500-m02 -Count 2
	I0317 11:09:12.920580    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:12.921464    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:12.921464    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-450500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\boot2docker.iso'
	I0317 11:09:15.546850    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:15.547848    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:15.547900    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-450500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\disk.vhd'
	I0317 11:09:18.284116    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:18.284116    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:18.285005    8508 main.go:141] libmachine: Starting VM...
	I0317 11:09:18.285005    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-450500-m02
	I0317 11:09:21.463148    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:21.463148    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:21.463148    8508 main.go:141] libmachine: Waiting for host to start...
	I0317 11:09:21.463148    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:09:23.824924    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:09:23.824989    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:23.825068    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:09:26.470325    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:26.470325    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:27.471278    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:09:29.773843    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:09:29.773843    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:29.774365    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:09:32.389518    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:32.390514    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:33.391692    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:09:35.611106    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:09:35.611709    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:35.611709    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:09:38.232452    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:38.232452    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:39.233263    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:09:41.477347    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:09:41.477347    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:41.477347    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:09:44.078632    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:09:44.078632    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:45.079740    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:09:47.325663    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:09:47.325663    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:47.325663    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:09:49.964742    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:09:49.964742    8508 main.go:141] libmachine: [stderr =====>] : 
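The "Waiting for host to start..." stretch above is a poll loop: minikube alternately queries the VM state and the first IPv4 address of the first NIC, sleeping about a second whenever the address query comes back empty (five empty rounds here before `172.25.21.189` appears). A sketch of that loop, with `query_ip` as an injected stand-in for the `(( Hyper-V\Get-VM ... ).networkadapters[0]).ipaddresses[0]` call:

```python
import time

def wait_for_ip(query_ip, timeout=120.0, delay=1.0, sleep=time.sleep):
    """Poll until query_ip() returns a non-empty address or timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        ip = query_ip().strip()
        if ip:
            return ip
        sleep(delay)  # empty stdout: NIC has no lease yet, retry
    raise TimeoutError("VM never reported an IP address")

if __name__ == "__main__":
    # Simulate two empty polls before the address shows up.
    answers = iter(["", "", "172.25.21.189"])
    print(wait_for_ip(lambda: next(answers), sleep=lambda s: None))
```

Each round trip through `powershell.exe -NoProfile -NonInteractive` costs 2-3s on its own, which is why even a fast boot spends roughly half a minute in this phase.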
	I0317 11:09:49.965463    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:09:52.118206    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:09:52.119001    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:52.119001    8508 machine.go:93] provisionDockerMachine start ...
	I0317 11:09:52.119145    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:09:54.297468    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:09:54.297468    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:54.298262    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:09:56.867767    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:09:56.867767    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:56.874060    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:09:56.889966    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.21.189 22 <nil> <nil>}
	I0317 11:09:56.890108    8508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 11:09:57.025425    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0317 11:09:57.025535    8508 buildroot.go:166] provisioning hostname "ha-450500-m02"
	I0317 11:09:57.025535    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:09:59.150822    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:09:59.151654    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:09:59.151817    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:01.694717    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:01.694717    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:01.700683    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:10:01.701352    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.21.189 22 <nil> <nil>}
	I0317 11:10:01.701352    8508 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450500-m02 && echo "ha-450500-m02" | sudo tee /etc/hostname
	I0317 11:10:01.871427    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450500-m02
	
	I0317 11:10:01.872028    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:03.997693    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:03.997693    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:03.998030    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:06.544339    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:06.545038    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:06.550986    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:10:06.551323    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.21.189 22 <nil> <nil>}
	I0317 11:10:06.551323    8508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 11:10:06.700282    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 11:10:06.700391    8508 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0317 11:10:06.700391    8508 buildroot.go:174] setting up certificates
	I0317 11:10:06.700391    8508 provision.go:84] configureAuth start
	I0317 11:10:06.700494    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:08.844761    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:08.844820    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:08.844820    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:11.404801    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:11.404801    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:11.404801    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:13.535299    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:13.535600    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:13.535730    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:16.079207    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:16.079608    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:16.079889    8508 provision.go:143] copyHostCerts
	I0317 11:10:16.079889    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0317 11:10:16.079889    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0317 11:10:16.079889    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0317 11:10:16.080685    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0317 11:10:16.082381    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0317 11:10:16.082381    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0317 11:10:16.082381    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0317 11:10:16.082972    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0317 11:10:16.084083    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0317 11:10:16.084241    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0317 11:10:16.084241    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0317 11:10:16.084769    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0317 11:10:16.085470    8508 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-450500-m02 san=[127.0.0.1 172.25.21.189 ha-450500-m02 localhost minikube]
	I0317 11:10:16.347143    8508 provision.go:177] copyRemoteCerts
	I0317 11:10:16.357740    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 11:10:16.357740    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:18.511269    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:18.511269    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:18.511703    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:21.098129    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:21.098129    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:21.098758    8508 sshutil.go:53] new ssh client: &{IP:172.25.21.189 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\id_rsa Username:docker}
	I0317 11:10:21.218417    8508 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8606408s)
	I0317 11:10:21.219150    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0317 11:10:21.219673    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 11:10:21.267931    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0317 11:10:21.268087    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0317 11:10:21.315290    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0317 11:10:21.315780    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 11:10:21.361115    8508 provision.go:87] duration metric: took 14.6606167s to configureAuth
	I0317 11:10:21.361185    8508 buildroot.go:189] setting minikube options for container-runtime
	I0317 11:10:21.361961    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:10:21.362239    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:23.477735    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:23.477735    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:23.477977    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:25.987084    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:25.988072    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:25.992687    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:10:25.993434    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.21.189 22 <nil> <nil>}
	I0317 11:10:25.993504    8508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0317 11:10:26.143222    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0317 11:10:26.143292    8508 buildroot.go:70] root file system type: tmpfs
	I0317 11:10:26.143486    8508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0317 11:10:26.143574    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:28.309612    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:28.309612    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:28.310386    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:30.898185    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:30.898185    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:30.904890    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:10:30.905640    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.21.189 22 <nil> <nil>}
	I0317 11:10:30.905640    8508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.16.34"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0317 11:10:31.078166    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.16.34
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0317 11:10:31.078166    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:33.244772    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:33.245420    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:33.245566    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:35.795882    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:35.795882    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:35.800061    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:10:35.800835    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.21.189 22 <nil> <nil>}
	I0317 11:10:35.800835    8508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0317 11:10:38.093045    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0317 11:10:38.093128    8508 machine.go:96] duration metric: took 45.9737901s to provisionDockerMachine
	I0317 11:10:38.093128    8508 client.go:171] duration metric: took 1m57.6935059s to LocalClient.Create
	I0317 11:10:38.093128    8508 start.go:167] duration metric: took 1m57.6935059s to libmachine.API.Create "ha-450500"
	I0317 11:10:38.093128    8508 start.go:293] postStartSetup for "ha-450500-m02" (driver="hyperv")
	I0317 11:10:38.093262    8508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 11:10:38.105541    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 11:10:38.105541    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:40.334763    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:40.334763    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:40.334763    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:42.858740    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:42.858740    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:42.859553    8508 sshutil.go:53] new ssh client: &{IP:172.25.21.189 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\id_rsa Username:docker}
	I0317 11:10:42.978688    8508 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8731109s)
	I0317 11:10:42.991063    8508 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 11:10:42.997925    8508 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 11:10:42.997925    8508 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0317 11:10:42.998421    8508 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0317 11:10:42.999417    8508 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> 89402.pem in /etc/ssl/certs
	I0317 11:10:42.999481    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> /etc/ssl/certs/89402.pem
	I0317 11:10:43.010619    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 11:10:43.031199    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /etc/ssl/certs/89402.pem (1708 bytes)
	I0317 11:10:43.080473    8508 start.go:296] duration metric: took 4.9871982s for postStartSetup
	I0317 11:10:43.083859    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:45.210865    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:45.210865    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:45.211441    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:47.733190    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:47.733733    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:47.734026    8508 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json ...
	I0317 11:10:47.736302    8508 start.go:128] duration metric: took 2m7.3446128s to createHost
	I0317 11:10:47.736302    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:49.863250    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:49.863410    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:49.863410    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:52.465464    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:52.465464    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:52.472121    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:10:52.472839    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.21.189 22 <nil> <nil>}
	I0317 11:10:52.472839    8508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 11:10:52.620917    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742209852.642347342
	
	I0317 11:10:52.620917    8508 fix.go:216] guest clock: 1742209852.642347342
	I0317 11:10:52.620917    8508 fix.go:229] Guest: 2025-03-17 11:10:52.642347342 +0000 UTC Remote: 2025-03-17 11:10:47.7363023 +0000 UTC m=+331.187404701 (delta=4.906045042s)
	I0317 11:10:52.621459    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:54.750504    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:54.750707    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:54.750784    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:10:57.317146    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:10:57.318125    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:57.324084    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:10:57.324902    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.21.189 22 <nil> <nil>}
	I0317 11:10:57.324902    8508 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1742209852
	I0317 11:10:57.486424    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 17 11:10:52 UTC 2025
	
	I0317 11:10:57.486424    8508 fix.go:236] clock set: Mon Mar 17 11:10:52 UTC 2025
	 (err=<nil>)
	I0317 11:10:57.486424    8508 start.go:83] releasing machines lock for "ha-450500-m02", held for 2m17.0956571s
	I0317 11:10:57.486424    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:10:59.617449    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:10:59.618417    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:10:59.618559    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:11:02.354004    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:11:02.354744    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:02.363332    8508 out.go:177] * Found network options:
	I0317 11:11:02.367099    8508 out.go:177]   - NO_PROXY=172.25.16.34
	W0317 11:11:02.370435    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	I0317 11:11:02.373194    8508 out.go:177]   - NO_PROXY=172.25.16.34
	W0317 11:11:02.375727    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	W0317 11:11:02.377217    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	I0317 11:11:02.379815    8508 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0317 11:11:02.379815    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:11:02.392967    8508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 11:11:02.392967    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m02 ).state
	I0317 11:11:04.732836    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:11:04.732836    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:04.732836    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:11:04.732836    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:11:04.732836    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:04.732836    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 11:11:07.513459    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:11:07.513765    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:07.513765    8508 sshutil.go:53] new ssh client: &{IP:172.25.21.189 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\id_rsa Username:docker}
	I0317 11:11:07.535407    8508 main.go:141] libmachine: [stdout =====>] : 172.25.21.189
	
	I0317 11:11:07.535464    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:07.536241    8508 sshutil.go:53] new ssh client: &{IP:172.25.21.189 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m02\id_rsa Username:docker}
	I0317 11:11:07.611515    8508 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.231605s)
	W0317 11:11:07.611589    8508 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0317 11:11:07.630055    8508 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2370487s)
	W0317 11:11:07.630055    8508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 11:11:07.642471    8508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 11:11:07.675231    8508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 11:11:07.675231    8508 start.go:495] detecting cgroup driver to use...
	I0317 11:11:07.675231    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:11:07.722168    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 11:11:07.754558    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0317 11:11:07.755586    8508 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0317 11:11:07.755586    8508 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0317 11:11:07.775578    8508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 11:11:07.786068    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 11:11:07.815746    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:11:07.849582    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 11:11:07.879688    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:11:07.909914    8508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 11:11:07.941343    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 11:11:07.973124    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 11:11:08.006601    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 11:11:08.036154    8508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 11:11:08.054170    8508 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 11:11:08.065562    8508 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 11:11:08.102038    8508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 11:11:08.139383    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:11:08.336947    8508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 11:11:08.373542    8508 start.go:495] detecting cgroup driver to use...
	I0317 11:11:08.387701    8508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0317 11:11:08.427461    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 11:11:08.459268    8508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 11:11:08.500892    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 11:11:08.538038    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:11:08.577227    8508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 11:11:08.647219    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:11:08.674116    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:11:08.724662    8508 ssh_runner.go:195] Run: which cri-dockerd
	I0317 11:11:08.740690    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0317 11:11:08.756236    8508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0317 11:11:08.799521    8508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0317 11:11:09.002660    8508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0317 11:11:09.194240    8508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0317 11:11:09.194320    8508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0317 11:11:09.239817    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:11:09.443257    8508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 11:11:12.046877    8508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6036s)
	I0317 11:11:12.058868    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0317 11:11:12.098701    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 11:11:12.141482    8508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0317 11:11:12.339695    8508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0317 11:11:12.551321    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:11:12.754154    8508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0317 11:11:12.794725    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 11:11:12.829972    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:11:13.035377    8508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0317 11:11:13.145802    8508 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0317 11:11:13.157487    8508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0317 11:11:13.166241    8508 start.go:563] Will wait 60s for crictl version
	I0317 11:11:13.179118    8508 ssh_runner.go:195] Run: which crictl
	I0317 11:11:13.199444    8508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 11:11:13.264554    8508 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0317 11:11:13.275164    8508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 11:11:13.323695    8508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 11:11:13.376760    8508 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0317 11:11:13.380719    8508 out.go:177]   - env NO_PROXY=172.25.16.34
	I0317 11:11:13.383197    8508 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0317 11:11:13.386693    8508 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0317 11:11:13.386693    8508 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0317 11:11:13.386693    8508 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0317 11:11:13.386693    8508 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4b:84:d5 Flags:up|broadcast|multicast|running}
	I0317 11:11:13.389636    8508 ip.go:214] interface addr: fe80::f0c7:c31c:6237:ef35/64
	I0317 11:11:13.389636    8508 ip.go:214] interface addr: 172.25.16.1/20
	I0317 11:11:13.402844    8508 ssh_runner.go:195] Run: grep 172.25.16.1	host.minikube.internal$ /etc/hosts
	I0317 11:11:13.409210    8508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:11:13.436734    8508 mustload.go:65] Loading cluster: ha-450500
	I0317 11:11:13.437452    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:11:13.437724    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:11:15.584271    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:11:15.584271    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:15.584271    8508 host.go:66] Checking if "ha-450500" exists ...
	I0317 11:11:15.585455    8508 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500 for IP: 172.25.21.189
	I0317 11:11:15.585513    8508 certs.go:194] generating shared ca certs ...
	I0317 11:11:15.585540    8508 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:11:15.586118    8508 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0317 11:11:15.586473    8508 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0317 11:11:15.586473    8508 certs.go:256] generating profile certs ...
	I0317 11:11:15.587438    8508 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\client.key
	I0317 11:11:15.587438    8508 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.b5f43119
	I0317 11:11:15.587438    8508 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.b5f43119 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.16.34 172.25.21.189 172.25.31.254]
	I0317 11:11:15.855076    8508 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.b5f43119 ...
	I0317 11:11:15.855076    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.b5f43119: {Name:mk30b3f325c53c61260398379690859ae7d2df8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:11:15.857179    8508 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.b5f43119 ...
	I0317 11:11:15.857179    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.b5f43119: {Name:mke75f3701be7cd8ecc8e9e9772462479c9067b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:11:15.858609    8508 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.b5f43119 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt
	I0317 11:11:15.873747    8508 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.b5f43119 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key
	I0317 11:11:15.875408    8508 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key
	I0317 11:11:15.875408    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0317 11:11:15.876100    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0317 11:11:15.876279    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0317 11:11:15.876507    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0317 11:11:15.876507    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0317 11:11:15.876507    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0317 11:11:15.876507    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0317 11:11:15.876507    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0317 11:11:15.877837    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem (1338 bytes)
	W0317 11:11:15.878241    8508 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940_empty.pem, impossibly tiny 0 bytes
	I0317 11:11:15.878706    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0317 11:11:15.879330    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0317 11:11:15.879682    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0317 11:11:15.879682    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0317 11:11:15.880493    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem (1708 bytes)
	I0317 11:11:15.880795    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:11:15.880975    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem -> /usr/share/ca-certificates/8940.pem
	I0317 11:11:15.881248    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> /usr/share/ca-certificates/89402.pem
	I0317 11:11:15.881490    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:11:18.065942    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:11:18.066783    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:18.066783    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:11:20.653806    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:11:20.653806    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:20.654801    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:11:20.763620    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0317 11:11:20.775877    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0317 11:11:20.811566    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0317 11:11:20.819881    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0317 11:11:20.848832    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0317 11:11:20.856700    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0317 11:11:20.889393    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0317 11:11:20.899897    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0317 11:11:20.937787    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0317 11:11:20.943715    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0317 11:11:20.978499    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0317 11:11:20.987288    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0317 11:11:21.007129    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 11:11:21.059138    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 11:11:21.114696    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 11:11:21.162731    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0317 11:11:21.223677    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0317 11:11:21.277851    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 11:11:21.329299    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 11:11:21.378831    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 11:11:21.424582    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 11:11:21.473365    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem --> /usr/share/ca-certificates/8940.pem (1338 bytes)
	I0317 11:11:21.522141    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /usr/share/ca-certificates/89402.pem (1708 bytes)
	I0317 11:11:21.572096    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0317 11:11:21.605651    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0317 11:11:21.638656    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0317 11:11:21.672872    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0317 11:11:21.707341    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0317 11:11:21.739400    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0317 11:11:21.768609    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0317 11:11:21.816789    8508 ssh_runner.go:195] Run: openssl version
	I0317 11:11:21.836412    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 11:11:21.866874    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:11:21.873633    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:11:21.885249    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:11:21.904629    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 11:11:21.934812    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8940.pem && ln -fs /usr/share/ca-certificates/8940.pem /etc/ssl/certs/8940.pem"
	I0317 11:11:21.964783    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8940.pem
	I0317 11:11:21.972542    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:46 /usr/share/ca-certificates/8940.pem
	I0317 11:11:21.983264    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8940.pem
	I0317 11:11:22.003704    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8940.pem /etc/ssl/certs/51391683.0"
	I0317 11:11:22.037715    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/89402.pem && ln -fs /usr/share/ca-certificates/89402.pem /etc/ssl/certs/89402.pem"
	I0317 11:11:22.069329    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89402.pem
	I0317 11:11:22.075724    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:46 /usr/share/ca-certificates/89402.pem
	I0317 11:11:22.086532    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89402.pem
	I0317 11:11:22.106439    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/89402.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 11:11:22.136854    8508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 11:11:22.143232    8508 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 11:11:22.143699    8508 kubeadm.go:934] updating node {m02 172.25.21.189 8443 v1.32.2 docker true true} ...
	I0317 11:11:22.143926    8508 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.21.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 11:11:22.143978    8508 kube-vip.go:115] generating kube-vip config ...
	I0317 11:11:22.155614    8508 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0317 11:11:22.187166    8508 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0317 11:11:22.187373    8508 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.31.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0317 11:11:22.199788    8508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 11:11:22.218340    8508 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0317 11:11:22.230705    8508 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0317 11:11:22.256199    8508 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl
	I0317 11:11:22.256266    8508 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm
	I0317 11:11:22.256373    8508 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet
	I0317 11:11:23.751426    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0317 11:11:23.760500    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm -> /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0317 11:11:23.764128    8508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0317 11:11:23.771177    8508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0317 11:11:23.772199    8508 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0317 11:11:23.772199    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0317 11:11:23.786952    8508 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0317 11:11:23.787255    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0317 11:11:24.056023    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 11:11:24.100011    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0317 11:11:24.111597    8508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0317 11:11:24.139642    8508 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0317 11:11:24.139642    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0317 11:11:25.038725    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0317 11:11:25.059333    8508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0317 11:11:25.090293    8508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 11:11:25.125828    8508 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0317 11:11:25.172997    8508 ssh_runner.go:195] Run: grep 172.25.31.254	control-plane.minikube.internal$ /etc/hosts
	I0317 11:11:25.180226    8508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.31.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:11:25.216634    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:11:25.430747    8508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:11:25.462858    8508 host.go:66] Checking if "ha-450500" exists ...
	I0317 11:11:25.463686    8508 start.go:317] joinCluster: &{Name:ha-450500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.21.189 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:11:25.463686    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0317 11:11:25.463686    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:11:27.640742    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:11:27.641799    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:27.641850    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:11:30.303419    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:11:30.303419    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:11:30.304323    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:11:30.822626    8508 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.3588263s)
	I0317 11:11:30.822682    8508 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.25.21.189 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 11:11:30.822807    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vn1ehv.5h9d51qftui03qsu --discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-450500-m02 --control-plane --apiserver-advertise-address=172.25.21.189 --apiserver-bind-port=8443"
	I0317 11:12:10.937739    8508 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vn1ehv.5h9d51qftui03qsu --discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-450500-m02 --control-plane --apiserver-advertise-address=172.25.21.189 --apiserver-bind-port=8443": (40.1146327s)
	I0317 11:12:10.937739    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0317 11:12:11.686764    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450500-m02 minikube.k8s.io/updated_at=2025_03_17T11_12_11_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=ha-450500 minikube.k8s.io/primary=false
	I0317 11:12:11.909051    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-450500-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0317 11:12:12.107018    8508 start.go:319] duration metric: took 46.6429838s to joinCluster
	I0317 11:12:12.107252    8508 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.25.21.189 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 11:12:12.107936    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:12:12.111193    8508 out.go:177] * Verifying Kubernetes components...
	I0317 11:12:12.127426    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:12:12.513757    8508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:12:12.552255    8508 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 11:12:12.552786    8508 kapi.go:59] client config for ha-450500: &rest.Config{Host:"https://172.25.31.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-450500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-450500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2e292e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0317 11:12:12.552786    8508 kubeadm.go:483] Overriding stale ClientConfig host https://172.25.31.254:8443 with https://172.25.16.34:8443
	I0317 11:12:12.553861    8508 node_ready.go:35] waiting up to 6m0s for node "ha-450500-m02" to be "Ready" ...
	I0317 11:12:12.554073    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:12.554120    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:12.554120    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:12.554120    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:12.574739    8508 round_trippers.go:581] Response Status: 200 OK in 20 milliseconds
	I0317 11:12:13.055100    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:13.055100    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:13.055100    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:13.055100    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:13.063791    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:12:13.554915    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:13.554915    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:13.554915    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:13.554915    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:13.563528    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:12:14.054350    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:14.054350    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:14.054350    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:14.054350    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:14.060569    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:14.555403    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:14.555403    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:14.555525    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:14.555525    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:14.561945    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:14.562472    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:15.054522    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:15.054769    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:15.054769    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:15.054828    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:15.061105    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:15.554963    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:15.554963    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:15.554963    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:15.554963    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:15.560978    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:16.056259    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:16.056259    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:16.056259    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:16.056259    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:16.064445    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:12:16.554996    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:16.554996    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:16.554996    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:16.554996    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:16.562633    8508 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 11:12:16.562766    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:17.055145    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:17.055214    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:17.055307    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:17.055307    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:17.472772    8508 round_trippers.go:581] Response Status: 200 OK in 417 milliseconds
	I0317 11:12:17.554591    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:17.554591    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:17.554591    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:17.554591    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:17.563882    8508 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0317 11:12:18.054353    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:18.054353    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:18.054353    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:18.054353    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:18.059804    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:18.554990    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:18.554990    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:18.554990    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:18.555060    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:18.560461    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:19.055203    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:19.055265    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:19.055265    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:19.055265    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:19.063703    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:12:19.064111    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:19.555260    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:19.555260    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:19.555260    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:19.555260    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:19.567376    8508 round_trippers.go:581] Response Status: 200 OK in 12 milliseconds
	I0317 11:12:20.054206    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:20.054206    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:20.054206    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:20.054206    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:20.069649    8508 round_trippers.go:581] Response Status: 200 OK in 15 milliseconds
	I0317 11:12:20.555136    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:20.555136    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:20.555136    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:20.555235    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:20.559309    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:21.055035    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:21.055035    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:21.055035    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:21.055035    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:21.061059    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:21.554986    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:21.554986    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:21.554986    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:21.554986    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:21.561654    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:21.561654    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:22.055151    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:22.055151    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:22.055151    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:22.055151    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:22.061716    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:22.554919    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:22.554919    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:22.554919    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:22.554919    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:22.561786    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:23.055543    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:23.055621    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:23.055686    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:23.055686    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:23.060939    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:23.554577    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:23.554577    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:23.554577    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:23.554577    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:23.559823    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:24.055284    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:24.055284    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:24.055284    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:24.055284    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:24.061548    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:24.061973    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:24.554517    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:24.554603    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:24.554603    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:24.554721    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:24.561374    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:25.056407    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:25.056475    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:25.056475    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:25.056475    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:25.061740    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:25.555351    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:25.555408    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:25.555463    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:25.555463    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:25.569896    8508 round_trippers.go:581] Response Status: 200 OK in 14 milliseconds
	I0317 11:12:26.054220    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:26.054220    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:26.054220    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:26.054220    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:26.059725    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:26.554488    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:26.554488    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:26.554488    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:26.554488    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:26.560770    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:26.560770    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:27.054561    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:27.054561    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:27.054561    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:27.054561    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:27.063034    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:12:27.555207    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:27.555207    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:27.555207    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:27.555324    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:27.561216    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:28.054424    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:28.054424    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:28.054424    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:28.054424    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:28.064498    8508 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0317 11:12:28.555572    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:28.555572    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:28.555572    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:28.555664    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:28.560500    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:28.561108    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:29.054141    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:29.054141    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:29.054141    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:29.054141    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:29.060995    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:29.555097    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:29.555097    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:29.555097    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:29.555097    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:29.560589    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:30.055344    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:30.055344    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:30.055344    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:30.055344    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:30.060751    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:30.555767    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:30.555767    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:30.555767    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:30.555767    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:30.563343    8508 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 11:12:30.564064    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:31.054839    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:31.054839    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:31.054839    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:31.054839    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:31.061108    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:31.554153    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:31.554153    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:31.554153    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:31.554153    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:31.560762    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:32.054846    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:32.054846    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:32.054846    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:32.054846    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:32.061166    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:32.554290    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:32.554290    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:32.554290    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:32.554290    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:32.559868    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:33.054691    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:33.054691    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.054885    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.054885    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.059436    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:33.059436    8508 node_ready.go:53] node "ha-450500-m02" has status "Ready":"False"
	I0317 11:12:33.554330    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:33.554330    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.554330    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.554330    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.559415    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:33.559529    8508 node_ready.go:49] node "ha-450500-m02" has status "Ready":"True"
	I0317 11:12:33.559529    8508 node_ready.go:38] duration metric: took 21.0055102s for node "ha-450500-m02" to be "Ready" ...
	I0317 11:12:33.559529    8508 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:12:33.559529    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:12:33.560075    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.560075    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.560075    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.564508    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:33.567369    8508 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-qd2nj" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.567369    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-qd2nj
	I0317 11:12:33.567614    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.567614    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.567646    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.571921    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:33.571921    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:33.571921    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.571921    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.571921    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.576801    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:33.577092    8508 pod_ready.go:93] pod "coredns-668d6bf9bc-qd2nj" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:33.577092    8508 pod_ready.go:82] duration metric: took 9.723ms for pod "coredns-668d6bf9bc-qd2nj" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.577092    8508 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-rhhkv" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.577307    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-rhhkv
	I0317 11:12:33.577307    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.577307    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.577307    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.583431    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:33.583464    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:33.583464    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.583464    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.583464    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.587422    8508 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 11:12:33.588494    8508 pod_ready.go:93] pod "coredns-668d6bf9bc-rhhkv" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:33.588531    8508 pod_ready.go:82] duration metric: took 11.4387ms for pod "coredns-668d6bf9bc-rhhkv" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.588531    8508 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.588712    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450500
	I0317 11:12:33.588712    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.588712    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.588712    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.592407    8508 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 11:12:33.592819    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:33.592848    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.592848    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.592848    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.597081    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:33.597081    8508 pod_ready.go:93] pod "etcd-ha-450500" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:33.597081    8508 pod_ready.go:82] duration metric: took 8.5033ms for pod "etcd-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.597081    8508 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.597081    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450500-m02
	I0317 11:12:33.597081    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.597081    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.597081    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.601174    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:33.601955    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:33.601955    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.601955    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.601955    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.605551    8508 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 11:12:33.605861    8508 pod_ready.go:93] pod "etcd-ha-450500-m02" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:33.605980    8508 pod_ready.go:82] duration metric: took 8.899ms for pod "etcd-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.605980    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.754563    8508 request.go:661] Waited for 148.581ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500
	I0317 11:12:33.755133    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500
	I0317 11:12:33.755133    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.755133    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.755133    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.760942    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:33.955180    8508 request.go:661] Waited for 193.5967ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:33.955180    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:33.955180    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:33.955180    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:33.955180    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:33.966174    8508 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0317 11:12:33.966507    8508 pod_ready.go:93] pod "kube-apiserver-ha-450500" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:33.966507    8508 pod_ready.go:82] duration metric: took 360.5238ms for pod "kube-apiserver-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:33.966507    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:34.154458    8508 request.go:661] Waited for 187.9492ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500-m02
	I0317 11:12:34.154458    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500-m02
	I0317 11:12:34.154458    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:34.154458    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:34.154458    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:34.161317    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:34.355330    8508 request.go:661] Waited for 193.5721ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:34.355330    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:34.355330    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:34.355330    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:34.355330    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:34.361875    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:34.362289    8508 pod_ready.go:93] pod "kube-apiserver-ha-450500-m02" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:34.362289    8508 pod_ready.go:82] duration metric: took 395.779ms for pod "kube-apiserver-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:34.362289    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:34.554901    8508 request.go:661] Waited for 192.6106ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500
	I0317 11:12:34.555310    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500
	I0317 11:12:34.555310    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:34.555310    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:34.555310    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:34.559913    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:12:34.754488    8508 request.go:661] Waited for 193.9322ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:34.754488    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:34.754488    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:34.754488    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:34.754488    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:34.761347    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:34.761774    8508 pod_ready.go:93] pod "kube-controller-manager-ha-450500" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:34.761774    8508 pod_ready.go:82] duration metric: took 399.4824ms for pod "kube-controller-manager-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:34.761834    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:34.954427    8508 request.go:661] Waited for 192.5119ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500-m02
	I0317 11:12:34.954427    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500-m02
	I0317 11:12:34.954427    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:34.954945    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:34.954945    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:34.960264    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:35.154734    8508 request.go:661] Waited for 193.9632ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:35.155257    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:35.155257    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:35.155257    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:35.155257    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:35.161400    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:35.162182    8508 pod_ready.go:93] pod "kube-controller-manager-ha-450500-m02" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:35.162303    8508 pod_ready.go:82] duration metric: took 400.4661ms for pod "kube-controller-manager-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:35.162303    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fthkw" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:35.354561    8508 request.go:661] Waited for 192.1187ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fthkw
	I0317 11:12:35.354561    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fthkw
	I0317 11:12:35.354561    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:35.354561    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:35.354561    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:35.360908    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:35.555745    8508 request.go:661] Waited for 194.6405ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:35.556296    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:35.556392    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:35.556392    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:35.556392    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:35.563530    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:35.563928    8508 pod_ready.go:93] pod "kube-proxy-fthkw" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:35.563928    8508 pod_ready.go:82] duration metric: took 401.6221ms for pod "kube-proxy-fthkw" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:35.563928    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jzvxr" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:35.754624    8508 request.go:661] Waited for 190.6943ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzvxr
	I0317 11:12:35.755143    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzvxr
	I0317 11:12:35.755143    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:35.755143    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:35.755286    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:35.764652    8508 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0317 11:12:35.955484    8508 request.go:661] Waited for 190.8307ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:35.955484    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:35.955484    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:35.955484    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:35.955484    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:35.961128    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:35.961561    8508 pod_ready.go:93] pod "kube-proxy-jzvxr" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:35.961561    8508 pod_ready.go:82] duration metric: took 397.6296ms for pod "kube-proxy-jzvxr" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:35.961561    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:36.155229    8508 request.go:661] Waited for 193.4257ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500
	I0317 11:12:36.155229    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500
	I0317 11:12:36.155876    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:36.155876    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:36.155919    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:36.161058    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:36.355653    8508 request.go:661] Waited for 194.2269ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:36.355981    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:12:36.355981    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:36.356011    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:36.356011    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:36.361762    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:36.362194    8508 pod_ready.go:93] pod "kube-scheduler-ha-450500" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:36.362194    8508 pod_ready.go:82] duration metric: took 400.6297ms for pod "kube-scheduler-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:36.362194    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:36.555115    8508 request.go:661] Waited for 192.7615ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500-m02
	I0317 11:12:36.555586    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500-m02
	I0317 11:12:36.555641    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:36.555641    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:36.555682    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:36.561551    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:12:36.755413    8508 request.go:661] Waited for 193.2923ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:36.755413    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:12:36.755413    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:36.755413    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:36.755413    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:36.764420    8508 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0317 11:12:36.765314    8508 pod_ready.go:93] pod "kube-scheduler-ha-450500-m02" in "kube-system" namespace has status "Ready":"True"
	I0317 11:12:36.765314    8508 pod_ready.go:82] duration metric: took 403.1172ms for pod "kube-scheduler-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:12:36.765386    8508 pod_ready.go:39] duration metric: took 3.2058328s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:12:36.765386    8508 api_server.go:52] waiting for apiserver process to appear ...
	I0317 11:12:36.778349    8508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 11:12:36.810488    8508 api_server.go:72] duration metric: took 24.7029659s to wait for apiserver process to appear ...
	I0317 11:12:36.810581    8508 api_server.go:88] waiting for apiserver healthz status ...
	I0317 11:12:36.810581    8508 api_server.go:253] Checking apiserver healthz at https://172.25.16.34:8443/healthz ...
	I0317 11:12:36.826345    8508 api_server.go:279] https://172.25.16.34:8443/healthz returned 200:
	ok
	I0317 11:12:36.826548    8508 round_trippers.go:470] GET https://172.25.16.34:8443/version
	I0317 11:12:36.826548    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:36.826548    8508 round_trippers.go:480]     Accept: application/json, */*
	I0317 11:12:36.826548    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:36.828591    8508 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 11:12:36.828798    8508 api_server.go:141] control plane version: v1.32.2
	I0317 11:12:36.828798    8508 api_server.go:131] duration metric: took 18.2166ms to wait for apiserver health ...
	I0317 11:12:36.828896    8508 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 11:12:36.954895    8508 request.go:661] Waited for 125.882ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:12:36.955530    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:12:36.955530    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:36.955530    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:36.955530    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:36.962711    8508 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 11:12:36.965128    8508 system_pods.go:59] 17 kube-system pods found
	I0317 11:12:36.965189    8508 system_pods.go:61] "coredns-668d6bf9bc-qd2nj" [1f982191-c45a-4681-907d-a0d9220b1f77] Running
	I0317 11:12:36.965251    8508 system_pods.go:61] "coredns-668d6bf9bc-rhhkv" [0dc113a4-430f-4c5b-bc05-05d8cc014ed7] Running
	I0317 11:12:36.965251    8508 system_pods.go:61] "etcd-ha-450500" [7a735c5c-89ec-488d-95c8-f7fa1160fa3c] Running
	I0317 11:12:36.965251    8508 system_pods.go:61] "etcd-ha-450500-m02" [78234926-bd4c-41f7-9f48-43e2bcd543a4] Running
	I0317 11:12:36.965251    8508 system_pods.go:61] "kindnet-ch8f7" [6247c683-e723-4b72-b373-89cb4f1b576d] Running
	I0317 11:12:36.965251    8508 system_pods.go:61] "kindnet-prwhr" [0f7a825d-bd7c-4428-8685-cfa8926ef827] Running
	I0317 11:12:36.965251    8508 system_pods.go:61] "kube-apiserver-ha-450500" [232d0746-fa49-4d17-b36e-557164865a8f] Running
	I0317 11:12:36.965251    8508 system_pods.go:61] "kube-apiserver-ha-450500-m02" [a445c598-90ff-4e54-a96f-3d206a54a108] Running
	I0317 11:12:36.965314    8508 system_pods.go:61] "kube-controller-manager-ha-450500" [afff30e2-5638-4b0f-bbe0-ec65ff25eef4] Running
	I0317 11:12:36.965314    8508 system_pods.go:61] "kube-controller-manager-ha-450500-m02" [e931480a-7ff2-4264-bef5-ff129c603e77] Running
	I0317 11:12:36.965314    8508 system_pods.go:61] "kube-proxy-fthkw" [8a9b1cd2-2eb8-49ac-8cc5-df138d6d0670] Running
	I0317 11:12:36.965314    8508 system_pods.go:61] "kube-proxy-jzvxr" [eeae069e-8b5a-449e-9fe2-ecafcd9733eb] Running
	I0317 11:12:36.965314    8508 system_pods.go:61] "kube-scheduler-ha-450500" [66d11ee5-babc-401c-bf3f-8bd94eb09e06] Running
	I0317 11:12:36.965375    8508 system_pods.go:61] "kube-scheduler-ha-450500-m02" [053f7a43-bff9-4dc7-8d8d-126f265dfbed] Running
	I0317 11:12:36.965375    8508 system_pods.go:61] "kube-vip-ha-450500" [55e8c247-e66e-47b2-b766-ead538ec0b9a] Running
	I0317 11:12:36.965375    8508 system_pods.go:61] "kube-vip-ha-450500-m02" [df1997ea-6b13-4caa-a041-2c63da793276] Running
	I0317 11:12:36.965414    8508 system_pods.go:61] "storage-provisioner" [b1a4725e-cedf-428a-a320-59d7374cba0d] Running
	I0317 11:12:36.965414    8508 system_pods.go:74] duration metric: took 136.5164ms to wait for pod list to return data ...
	I0317 11:12:36.965414    8508 default_sa.go:34] waiting for default service account to be created ...
	I0317 11:12:37.154827    8508 request.go:661] Waited for 189.2947ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/default/serviceaccounts
	I0317 11:12:37.154827    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/default/serviceaccounts
	I0317 11:12:37.154827    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:37.154827    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:37.154827    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:37.164412    8508 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0317 11:12:37.164577    8508 default_sa.go:45] found service account: "default"
	I0317 11:12:37.164577    8508 default_sa.go:55] duration metric: took 199.1615ms for default service account to be created ...
	I0317 11:12:37.164577    8508 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 11:12:37.355377    8508 request.go:661] Waited for 190.7988ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:12:37.355377    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:12:37.355377    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:37.355377    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:37.355377    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:37.362316    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:12:37.364647    8508 system_pods.go:86] 17 kube-system pods found
	I0317 11:12:37.364647    8508 system_pods.go:89] "coredns-668d6bf9bc-qd2nj" [1f982191-c45a-4681-907d-a0d9220b1f77] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "coredns-668d6bf9bc-rhhkv" [0dc113a4-430f-4c5b-bc05-05d8cc014ed7] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "etcd-ha-450500" [7a735c5c-89ec-488d-95c8-f7fa1160fa3c] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "etcd-ha-450500-m02" [78234926-bd4c-41f7-9f48-43e2bcd543a4] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kindnet-ch8f7" [6247c683-e723-4b72-b373-89cb4f1b576d] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kindnet-prwhr" [0f7a825d-bd7c-4428-8685-cfa8926ef827] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-apiserver-ha-450500" [232d0746-fa49-4d17-b36e-557164865a8f] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-apiserver-ha-450500-m02" [a445c598-90ff-4e54-a96f-3d206a54a108] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-controller-manager-ha-450500" [afff30e2-5638-4b0f-bbe0-ec65ff25eef4] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-controller-manager-ha-450500-m02" [e931480a-7ff2-4264-bef5-ff129c603e77] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-proxy-fthkw" [8a9b1cd2-2eb8-49ac-8cc5-df138d6d0670] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-proxy-jzvxr" [eeae069e-8b5a-449e-9fe2-ecafcd9733eb] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-scheduler-ha-450500" [66d11ee5-babc-401c-bf3f-8bd94eb09e06] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-scheduler-ha-450500-m02" [053f7a43-bff9-4dc7-8d8d-126f265dfbed] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-vip-ha-450500" [55e8c247-e66e-47b2-b766-ead538ec0b9a] Running
	I0317 11:12:37.364647    8508 system_pods.go:89] "kube-vip-ha-450500-m02" [df1997ea-6b13-4caa-a041-2c63da793276] Running
	I0317 11:12:37.365246    8508 system_pods.go:89] "storage-provisioner" [b1a4725e-cedf-428a-a320-59d7374cba0d] Running
	I0317 11:12:37.365246    8508 system_pods.go:126] duration metric: took 200.668ms to wait for k8s-apps to be running ...
	I0317 11:12:37.365292    8508 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 11:12:37.375568    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 11:12:37.401869    8508 system_svc.go:56] duration metric: took 36.5231ms WaitForService to wait for kubelet
	I0317 11:12:37.401869    8508 kubeadm.go:582] duration metric: took 25.2943424s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 11:12:37.401935    8508 node_conditions.go:102] verifying NodePressure condition ...
	I0317 11:12:37.554482    8508 request.go:661] Waited for 152.4698ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes
	I0317 11:12:37.554482    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes
	I0317 11:12:37.554950    8508 round_trippers.go:476] Request Headers:
	I0317 11:12:37.554950    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:12:37.554950    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:12:37.566997    8508 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0317 11:12:37.567691    8508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 11:12:37.567691    8508 node_conditions.go:123] node cpu capacity is 2
	I0317 11:12:37.567776    8508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 11:12:37.567776    8508 node_conditions.go:123] node cpu capacity is 2
	I0317 11:12:37.567776    8508 node_conditions.go:105] duration metric: took 165.8395ms to run NodePressure ...
	I0317 11:12:37.567776    8508 start.go:241] waiting for startup goroutines ...
	I0317 11:12:37.567992    8508 start.go:255] writing updated cluster config ...
	I0317 11:12:37.572518    8508 out.go:201] 
	I0317 11:12:37.591469    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:12:37.592426    8508 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json ...
	I0317 11:12:37.598440    8508 out.go:177] * Starting "ha-450500-m03" control-plane node in "ha-450500" cluster
	I0317 11:12:37.602446    8508 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 11:12:37.602446    8508 cache.go:56] Caching tarball of preloaded images
	I0317 11:12:37.602760    8508 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 11:12:37.602760    8508 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0317 11:12:37.603382    8508 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json ...
	I0317 11:12:37.610585    8508 start.go:360] acquireMachinesLock for ha-450500-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 11:12:37.611355    8508 start.go:364] duration metric: took 673.4µs to acquireMachinesLock for "ha-450500-m03"
	I0317 11:12:37.611423    8508 start.go:93] Provisioning new machine with config: &{Name:ha-450500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.21.189 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 11:12:37.611423    8508 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0317 11:12:37.615250    8508 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0317 11:12:37.615250    8508 start.go:159] libmachine.API.Create for "ha-450500" (driver="hyperv")
	I0317 11:12:37.615250    8508 client.go:168] LocalClient.Create starting
	I0317 11:12:37.616765    8508 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0317 11:12:37.617305    8508 main.go:141] libmachine: Decoding PEM data...
	I0317 11:12:37.617698    8508 main.go:141] libmachine: Parsing certificate...
	I0317 11:12:37.618070    8508 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0317 11:12:37.618476    8508 main.go:141] libmachine: Decoding PEM data...
	I0317 11:12:37.618476    8508 main.go:141] libmachine: Parsing certificate...
	I0317 11:12:37.619108    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0317 11:12:39.643873    8508 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0317 11:12:39.643873    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:12:39.644116    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0317 11:12:41.449478    8508 main.go:141] libmachine: [stdout =====>] : False
	
	I0317 11:12:41.449478    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:12:41.449590    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 11:12:42.979010    8508 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 11:12:42.979010    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:12:42.979157    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 11:12:46.759725    8508 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 11:12:46.759997    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:12:46.762290    8508 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0317 11:12:47.243667    8508 main.go:141] libmachine: Creating SSH key...
	I0317 11:12:47.590392    8508 main.go:141] libmachine: Creating VM...
	I0317 11:12:47.590392    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 11:12:50.575729    8508 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 11:12:50.575729    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:12:50.576567    8508 main.go:141] libmachine: Using switch "Default Switch"
	I0317 11:12:50.576648    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 11:12:52.406332    8508 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 11:12:52.406332    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:12:52.406985    8508 main.go:141] libmachine: Creating VHD
	I0317 11:12:52.406985    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0317 11:12:56.278046    8508 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 87EC2771-5F2A-4102-A38E-D9D489CF70CB
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0317 11:12:56.278216    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:12:56.278216    8508 main.go:141] libmachine: Writing magic tar header
	I0317 11:12:56.278303    8508 main.go:141] libmachine: Writing SSH key tar header
	I0317 11:12:56.291816    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0317 11:12:59.532486    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:12:59.532819    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:12:59.532819    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\disk.vhd' -SizeBytes 20000MB
	I0317 11:13:02.156812    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:02.156812    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:02.157608    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-450500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0317 11:13:05.849485    8508 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-450500-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0317 11:13:05.849485    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:05.849485    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-450500-m03 -DynamicMemoryEnabled $false
	I0317 11:13:08.201596    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:08.201596    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:08.201843    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-450500-m03 -Count 2
	I0317 11:13:10.450951    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:10.450951    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:10.451635    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-450500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\boot2docker.iso'
	I0317 11:13:13.095724    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:13.095724    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:13.096027    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-450500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\disk.vhd'
	I0317 11:13:15.777774    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:15.778509    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:15.778509    8508 main.go:141] libmachine: Starting VM...
	I0317 11:13:15.778732    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-450500-m03
	I0317 11:13:18.895602    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:18.895602    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:18.895602    8508 main.go:141] libmachine: Waiting for host to start...
	I0317 11:13:18.895602    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:13:21.222654    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:13:21.222654    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:21.222654    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:13:23.777334    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:23.777334    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:24.778323    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:13:27.105716    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:13:27.106610    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:27.106810    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:13:29.729891    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:29.729891    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:30.730598    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:13:32.974753    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:13:32.974753    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:32.975022    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:13:35.533118    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:35.533118    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:36.533356    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:13:38.844572    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:13:38.844572    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:38.845404    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:13:41.434725    8508 main.go:141] libmachine: [stdout =====>] : 
	I0317 11:13:41.435525    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:42.436521    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:13:44.685204    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:13:44.685204    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:44.685204    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:13:47.319208    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:13:47.319208    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:47.319409    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:13:49.469774    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:13:49.469774    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:49.469774    8508 machine.go:93] provisionDockerMachine start ...
	I0317 11:13:49.469774    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:13:51.692302    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:13:51.693264    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:51.693264    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:13:54.299861    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:13:54.299861    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:54.306223    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:13:54.306337    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.19.102 22 <nil> <nil>}
	I0317 11:13:54.306337    8508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 11:13:54.437799    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0317 11:13:54.437799    8508 buildroot.go:166] provisioning hostname "ha-450500-m03"
	I0317 11:13:54.438341    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:13:56.609069    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:13:56.609069    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:56.609069    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:13:59.186271    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:13:59.187118    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:13:59.192671    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:13:59.193442    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.19.102 22 <nil> <nil>}
	I0317 11:13:59.193442    8508 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450500-m03 && echo "ha-450500-m03" | sudo tee /etc/hostname
	I0317 11:13:59.349997    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450500-m03
	
	I0317 11:13:59.349997    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:01.559393    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:01.559393    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:01.559393    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:04.193668    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:04.194703    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:04.200593    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:14:04.201252    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.19.102 22 <nil> <nil>}
	I0317 11:14:04.201252    8508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450500-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450500-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450500-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 11:14:04.345564    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 11:14:04.345564    8508 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0317 11:14:04.345564    8508 buildroot.go:174] setting up certificates
	I0317 11:14:04.345564    8508 provision.go:84] configureAuth start
	I0317 11:14:04.346102    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:06.604863    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:06.605362    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:06.605362    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:09.195920    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:09.195920    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:09.195920    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:11.393420    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:11.393420    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:11.393420    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:13.971892    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:13.972826    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:13.972909    8508 provision.go:143] copyHostCerts
	I0317 11:14:13.973126    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0317 11:14:13.973464    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0317 11:14:13.973528    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0317 11:14:13.973633    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0317 11:14:13.974529    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0317 11:14:13.975057    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0317 11:14:13.975057    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0317 11:14:13.975256    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0317 11:14:13.976095    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0317 11:14:13.976095    8508 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0317 11:14:13.976095    8508 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0317 11:14:13.976785    8508 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0317 11:14:13.977584    8508 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-450500-m03 san=[127.0.0.1 172.25.19.102 ha-450500-m03 localhost minikube]
	I0317 11:14:14.393909    8508 provision.go:177] copyRemoteCerts
	I0317 11:14:14.406073    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 11:14:14.406147    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:16.572409    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:16.572575    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:16.572686    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:19.184285    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:19.184285    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:19.185227    8508 sshutil.go:53] new ssh client: &{IP:172.25.19.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\id_rsa Username:docker}
	I0317 11:14:19.290840    8508 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8847299s)
	I0317 11:14:19.290840    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0317 11:14:19.290840    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 11:14:19.336651    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0317 11:14:19.337244    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0317 11:14:19.382465    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0317 11:14:19.382938    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 11:14:19.427140    8508 provision.go:87] duration metric: took 15.0814127s to configureAuth
	I0317 11:14:19.427219    8508 buildroot.go:189] setting minikube options for container-runtime
	I0317 11:14:19.427871    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:14:19.427871    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:21.602097    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:21.602097    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:21.602748    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:24.225066    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:24.225533    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:24.232188    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:14:24.232849    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.19.102 22 <nil> <nil>}
	I0317 11:14:24.232849    8508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0317 11:14:24.374274    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0317 11:14:24.374274    8508 buildroot.go:70] root file system type: tmpfs
	I0317 11:14:24.374274    8508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0317 11:14:24.374996    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:26.512583    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:26.512583    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:26.513244    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:29.112887    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:29.112887    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:29.119456    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:14:29.120304    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.19.102 22 <nil> <nil>}
	I0317 11:14:29.120593    8508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.16.34"
	Environment="NO_PROXY=172.25.16.34,172.25.21.189"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0317 11:14:29.288750    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.16.34
	Environment=NO_PROXY=172.25.16.34,172.25.21.189
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0317 11:14:29.288750    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:31.483668    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:31.483668    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:31.483668    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:34.092330    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:34.092330    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:34.098474    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:14:34.098474    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.19.102 22 <nil> <nil>}
	I0317 11:14:34.099000    8508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0317 11:14:36.347713    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0317 11:14:36.347713    8508 machine.go:96] duration metric: took 46.8775827s to provisionDockerMachine
	I0317 11:14:36.347713    8508 client.go:171] duration metric: took 1m58.7315615s to LocalClient.Create
	I0317 11:14:36.347713    8508 start.go:167] duration metric: took 1m58.7315615s to libmachine.API.Create "ha-450500"
	I0317 11:14:36.347713    8508 start.go:293] postStartSetup for "ha-450500-m03" (driver="hyperv")
	I0317 11:14:36.347713    8508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 11:14:36.359713    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 11:14:36.359713    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:38.594378    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:38.595363    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:38.595437    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:41.168190    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:41.168283    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:41.168448    8508 sshutil.go:53] new ssh client: &{IP:172.25.19.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\id_rsa Username:docker}
	I0317 11:14:41.274774    8508 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9150236s)
	I0317 11:14:41.287102    8508 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 11:14:41.295877    8508 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 11:14:41.295877    8508 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0317 11:14:41.296515    8508 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0317 11:14:41.297191    8508 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> 89402.pem in /etc/ssl/certs
	I0317 11:14:41.297191    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> /etc/ssl/certs/89402.pem
	I0317 11:14:41.308357    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 11:14:41.329650    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /etc/ssl/certs/89402.pem (1708 bytes)
	I0317 11:14:41.377920    8508 start.go:296] duration metric: took 5.0301692s for postStartSetup
	I0317 11:14:41.380919    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:43.562760    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:43.563755    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:43.563755    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:46.135072    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:46.135625    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:46.135625    8508 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\config.json ...
	I0317 11:14:46.138914    8508 start.go:128] duration metric: took 2m8.5265152s to createHost
	I0317 11:14:46.139700    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:48.418326    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:48.418326    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:48.418326    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:50.993580    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:50.993580    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:50.999925    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:14:51.000480    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.19.102 22 <nil> <nil>}
	I0317 11:14:51.000536    8508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 11:14:51.133447    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742210091.155605387
	
	I0317 11:14:51.133447    8508 fix.go:216] guest clock: 1742210091.155605387
	I0317 11:14:51.133447    8508 fix.go:229] Guest: 2025-03-17 11:14:51.155605387 +0000 UTC Remote: 2025-03-17 11:14:46.1394743 +0000 UTC m=+569.588783001 (delta=5.016131087s)
	I0317 11:14:51.133447    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:53.370296    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:53.370296    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:53.370296    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:14:55.953166    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:14:55.953166    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:55.960437    8508 main.go:141] libmachine: Using SSH client type: native
	I0317 11:14:55.961153    8508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.19.102 22 <nil> <nil>}
	I0317 11:14:55.961153    8508 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1742210091
	I0317 11:14:56.103828    8508 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 17 11:14:51 UTC 2025
	
	I0317 11:14:56.103828    8508 fix.go:236] clock set: Mon Mar 17 11:14:51 UTC 2025
	 (err=<nil>)
	I0317 11:14:56.103828    8508 start.go:83] releasing machines lock for "ha-450500-m03", held for 2m18.4914218s
	I0317 11:14:56.104405    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:14:58.286278    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:14:58.286278    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:14:58.286378    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:15:00.905094    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:15:00.905172    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:00.908982    8508 out.go:177] * Found network options:
	I0317 11:15:00.913265    8508 out.go:177]   - NO_PROXY=172.25.16.34,172.25.21.189
	W0317 11:15:00.916722    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	W0317 11:15:00.916786    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	I0317 11:15:00.920940    8508 out.go:177]   - NO_PROXY=172.25.16.34,172.25.21.189
	W0317 11:15:00.924920    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	W0317 11:15:00.924920    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	W0317 11:15:00.925951    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	W0317 11:15:00.925951    8508 proxy.go:119] fail to check proxy env: Error ip not in block
	I0317 11:15:00.927912    8508 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0317 11:15:00.928989    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:15:00.939909    8508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 11:15:00.939909    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500-m03 ).state
	I0317 11:15:03.233672    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:15:03.234099    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:03.234099    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:15:03.250398    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:15:03.250398    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:03.250398    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500-m03 ).networkadapters[0]).ipaddresses[0]
	I0317 11:15:05.926492    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:15:05.926895    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:05.927304    8508 sshutil.go:53] new ssh client: &{IP:172.25.19.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\id_rsa Username:docker}
	I0317 11:15:05.952454    8508 main.go:141] libmachine: [stdout =====>] : 172.25.19.102
	
	I0317 11:15:05.952454    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:05.952925    8508 sshutil.go:53] new ssh client: &{IP:172.25.19.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500-m03\id_rsa Username:docker}
	I0317 11:15:06.017777    8508 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0777044s)
	W0317 11:15:06.017777    8508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 11:15:06.035326    8508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 11:15:06.038290    8508 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1092615s)
	W0317 11:15:06.038290    8508 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0317 11:15:06.067494    8508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 11:15:06.067494    8508 start.go:495] detecting cgroup driver to use...
	I0317 11:15:06.067667    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:15:06.116328    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 11:15:06.150225    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0317 11:15:06.160731    8508 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0317 11:15:06.160764    8508 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0317 11:15:06.176524    8508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 11:15:06.193279    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 11:15:06.226389    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:15:06.259499    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 11:15:06.294519    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 11:15:06.326057    8508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 11:15:06.359772    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 11:15:06.391326    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 11:15:06.423236    8508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
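Editor's note: the sed commands above rewrite /etc/containerd/config.toml in place to select the cgroupfs driver. The core edit (flipping `SystemdCgroup` while preserving indentation) can be sketched in Go; the sample TOML fragment is illustrative, not a full containerd config:

```go
package main

import (
	"fmt"
	"regexp"
)

// setCgroupfs mirrors the sed rewrite in the log:
// `sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`,
// forcing containerd's runc runtime onto the cgroupfs driver.
func setCgroupfs(toml string) string {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(toml, "${1}SystemdCgroup = false")
}

func main() {
	cfg := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
		"    SystemdCgroup = true\n"
	fmt.Print(setCgroupfs(cfg)) // SystemdCgroup becomes false, indentation kept
}
```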
	I0317 11:15:06.454021    8508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 11:15:06.472904    8508 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 11:15:06.485086    8508 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 11:15:06.518907    8508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 11:15:06.546483    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:15:06.760165    8508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 11:15:06.793752    8508 start.go:495] detecting cgroup driver to use...
	I0317 11:15:06.804956    8508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0317 11:15:06.842822    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 11:15:06.877224    8508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 11:15:06.920453    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 11:15:06.961695    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:15:07.001298    8508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 11:15:07.070070    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 11:15:07.094206    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 11:15:07.140566    8508 ssh_runner.go:195] Run: which cri-dockerd
	I0317 11:15:07.159026    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0317 11:15:07.178431    8508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0317 11:15:07.220820    8508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0317 11:15:07.411288    8508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0317 11:15:07.603763    8508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0317 11:15:07.603763    8508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0317 11:15:07.646468    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:15:07.849408    8508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 11:15:10.484639    8508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6351274s)
	I0317 11:15:10.497265    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0317 11:15:10.533379    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 11:15:10.569879    8508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0317 11:15:10.786050    8508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0317 11:15:10.991152    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:15:11.211713    8508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0317 11:15:11.264048    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 11:15:11.299904    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:15:11.502945    8508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0317 11:15:11.621306    8508 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0317 11:15:11.634008    8508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0317 11:15:11.642887    8508 start.go:563] Will wait 60s for crictl version
	I0317 11:15:11.652876    8508 ssh_runner.go:195] Run: which crictl
	I0317 11:15:11.670718    8508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 11:15:11.729357    8508 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0317 11:15:11.737916    8508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 11:15:11.787651    8508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 11:15:11.829755    8508 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0317 11:15:11.832590    8508 out.go:177]   - env NO_PROXY=172.25.16.34
	I0317 11:15:11.835962    8508 out.go:177]   - env NO_PROXY=172.25.16.34,172.25.21.189
	I0317 11:15:11.838027    8508 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0317 11:15:11.842073    8508 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0317 11:15:11.842073    8508 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0317 11:15:11.842073    8508 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0317 11:15:11.842073    8508 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4b:84:d5 Flags:up|broadcast|multicast|running}
	I0317 11:15:11.844144    8508 ip.go:214] interface addr: fe80::f0c7:c31c:6237:ef35/64
	I0317 11:15:11.844144    8508 ip.go:214] interface addr: 172.25.16.1/20
	I0317 11:15:11.856174    8508 ssh_runner.go:195] Run: grep 172.25.16.1	host.minikube.internal$ /etc/hosts
	I0317 11:15:11.862740    8508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
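Editor's note: the bash one-liner above updates /etc/hosts idempotently: it filters out any stale `host.minikube.internal` line, appends the current gateway IP, and copies the result back into place. The same upsert logic as a small Go sketch (`upsertHostsEntry` is an illustrative helper, not a minikube API):

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes any existing line ending in "\t<name>"
// and appends "<ip>\t<name>", mirroring the grep -v + echo pipeline.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n172.25.16.9\thost.minikube.internal\n"
	// The stale 172.25.16.9 entry is replaced by the current gateway IP.
	fmt.Print(upsertHostsEntry(hosts, "172.25.16.1", "host.minikube.internal"))
}
```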
	I0317 11:15:11.885158    8508 mustload.go:65] Loading cluster: ha-450500
	I0317 11:15:11.886072    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:15:11.886961    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:15:14.039199    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:15:14.039199    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:14.039379    8508 host.go:66] Checking if "ha-450500" exists ...
	I0317 11:15:14.040412    8508 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500 for IP: 172.25.19.102
	I0317 11:15:14.040491    8508 certs.go:194] generating shared ca certs ...
	I0317 11:15:14.040491    8508 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:15:14.041140    8508 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0317 11:15:14.041502    8508 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0317 11:15:14.041502    8508 certs.go:256] generating profile certs ...
	I0317 11:15:14.042105    8508 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\client.key
	I0317 11:15:14.042105    8508 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.99204d39
	I0317 11:15:14.042105    8508 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.99204d39 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.16.34 172.25.21.189 172.25.19.102 172.25.31.254]
	I0317 11:15:14.240081    8508 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.99204d39 ...
	I0317 11:15:14.240081    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.99204d39: {Name:mk255eb8c6c9ec06403e380d9b5b4bdaba94ffb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:15:14.242749    8508 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.99204d39 ...
	I0317 11:15:14.242749    8508 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.99204d39: {Name:mk8f0af1d56c3096cbfdc7ace52600645aafb8e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 11:15:14.243735    8508 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt.99204d39 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt
	I0317 11:15:14.258744    8508 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key.99204d39 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key
	I0317 11:15:14.264208    8508 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key
	I0317 11:15:14.264208    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0317 11:15:14.264208    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0317 11:15:14.265008    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0317 11:15:14.265223    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0317 11:15:14.265434    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0317 11:15:14.265434    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0317 11:15:14.265972    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0317 11:15:14.266178    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0317 11:15:14.266229    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem (1338 bytes)
	W0317 11:15:14.267031    8508 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940_empty.pem, impossibly tiny 0 bytes
	I0317 11:15:14.267054    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0317 11:15:14.267054    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0317 11:15:14.267596    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0317 11:15:14.268009    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0317 11:15:14.268009    8508 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem (1708 bytes)
	I0317 11:15:14.268697    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:15:14.268839    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem -> /usr/share/ca-certificates/8940.pem
	I0317 11:15:14.268839    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> /usr/share/ca-certificates/89402.pem
	I0317 11:15:14.268839    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:15:16.427874    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:15:16.427874    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:16.427874    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:15:19.024483    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:15:19.024585    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:19.025160    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:15:19.129159    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0317 11:15:19.138532    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0317 11:15:19.176902    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0317 11:15:19.183881    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0317 11:15:19.213843    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0317 11:15:19.220438    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0317 11:15:19.249031    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0317 11:15:19.255289    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0317 11:15:19.284043    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0317 11:15:19.290482    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0317 11:15:19.322049    8508 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0317 11:15:19.329399    8508 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0317 11:15:19.348582    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 11:15:19.398357    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 11:15:19.442075    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 11:15:19.496229    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0317 11:15:19.541893    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0317 11:15:19.587925    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 11:15:19.633371    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 11:15:19.685478    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-450500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 11:15:19.734152    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 11:15:19.776998    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem --> /usr/share/ca-certificates/8940.pem (1338 bytes)
	I0317 11:15:19.821388    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /usr/share/ca-certificates/89402.pem (1708 bytes)
	I0317 11:15:19.866321    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0317 11:15:19.897359    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0317 11:15:19.929307    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0317 11:15:19.962392    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0317 11:15:19.992834    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0317 11:15:20.023152    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0317 11:15:20.055363    8508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0317 11:15:20.096735    8508 ssh_runner.go:195] Run: openssl version
	I0317 11:15:20.115931    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 11:15:20.146722    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:15:20.154700    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:15:20.166295    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 11:15:20.185407    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 11:15:20.217335    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8940.pem && ln -fs /usr/share/ca-certificates/8940.pem /etc/ssl/certs/8940.pem"
	I0317 11:15:20.246143    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8940.pem
	I0317 11:15:20.252963    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:46 /usr/share/ca-certificates/8940.pem
	I0317 11:15:20.263775    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8940.pem
	I0317 11:15:20.284998    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8940.pem /etc/ssl/certs/51391683.0"
	I0317 11:15:20.318558    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/89402.pem && ln -fs /usr/share/ca-certificates/89402.pem /etc/ssl/certs/89402.pem"
	I0317 11:15:20.351183    8508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89402.pem
	I0317 11:15:20.359028    8508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:46 /usr/share/ca-certificates/89402.pem
	I0317 11:15:20.372308    8508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89402.pem
	I0317 11:15:20.394494    8508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/89402.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 11:15:20.427544    8508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 11:15:20.434494    8508 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 11:15:20.434494    8508 kubeadm.go:934] updating node {m03 172.25.19.102 8443 v1.32.2 docker true true} ...
	I0317 11:15:20.435029    8508 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450500-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.19.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 11:15:20.435079    8508 kube-vip.go:115] generating kube-vip config ...
	I0317 11:15:20.446700    8508 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0317 11:15:20.477823    8508 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0317 11:15:20.477823    8508 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.25.31.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0317 11:15:20.490226    8508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 11:15:20.508399    8508 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0317 11:15:20.520911    8508 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0317 11:15:20.538872    8508 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256
	I0317 11:15:20.538872    8508 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0317 11:15:20.538872    8508 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256
	I0317 11:15:20.539685    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0317 11:15:20.540011    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm -> /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0317 11:15:20.554737    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 11:15:20.554737    8508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0317 11:15:20.554737    8508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0317 11:15:20.579481    8508 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0317 11:15:20.579540    8508 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0317 11:15:20.579665    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0317 11:15:20.579792    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0317 11:15:20.579953    8508 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0317 11:15:20.591987    8508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0317 11:15:20.634302    8508 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0317 11:15:20.634357    8508 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0317 11:15:22.025520    8508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0317 11:15:22.051106    8508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0317 11:15:22.086097    8508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 11:15:22.121756    8508 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0317 11:15:22.170554    8508 ssh_runner.go:195] Run: grep 172.25.31.254	control-plane.minikube.internal$ /etc/hosts
	I0317 11:15:22.177167    8508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.31.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 11:15:22.212326    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:15:22.430849    8508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:15:22.462141    8508 host.go:66] Checking if "ha-450500" exists ...
	I0317 11:15:22.541128    8508 start.go:317] joinCluster: &{Name:ha-450500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-450500 Namespace:default APIServerHAVIP:172.25.31.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.34 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.21.189 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.25.19.102 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 11:15:22.541441    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0317 11:15:22.541526    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-450500 ).state
	I0317 11:15:24.742855    8508 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 11:15:24.742855    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:24.743509    8508 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-450500 ).networkadapters[0]).ipaddresses[0]
	I0317 11:15:27.331280    8508 main.go:141] libmachine: [stdout =====>] : 172.25.16.34
	
	I0317 11:15:27.331280    8508 main.go:141] libmachine: [stderr =====>] : 
	I0317 11:15:27.331280    8508 sshutil.go:53] new ssh client: &{IP:172.25.16.34 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-450500\id_rsa Username:docker}
	I0317 11:15:27.523577    8508 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0": (4.982098s)
	I0317 11:15:27.523577    8508 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.25.19.102 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 11:15:27.523577    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6yyl46.a7oj7eb2wz8mbr99 --discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-450500-m03 --control-plane --apiserver-advertise-address=172.25.19.102 --apiserver-bind-port=8443"
	I0317 11:16:09.800170    8508 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6yyl46.a7oj7eb2wz8mbr99 --discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-450500-m03 --control-plane --apiserver-advertise-address=172.25.19.102 --apiserver-bind-port=8443": (42.2762667s)
	I0317 11:16:09.800170    8508 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0317 11:16:10.848630    8508 ssh_runner.go:235] Completed: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet": (1.0484519s)
	I0317 11:16:10.861795    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450500-m03 minikube.k8s.io/updated_at=2025_03_17T11_16_10_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=ha-450500 minikube.k8s.io/primary=false
	I0317 11:16:11.091287    8508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-450500-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0317 11:16:11.298616    8508 start.go:319] duration metric: took 48.7571112s to joinCluster
	I0317 11:16:11.298856    8508 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.25.19.102 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 11:16:11.300313    8508 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 11:16:11.302139    8508 out.go:177] * Verifying Kubernetes components...
	I0317 11:16:11.322778    8508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 11:16:11.830977    8508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 11:16:11.882935    8508 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 11:16:11.884247    8508 kapi.go:59] client config for ha-450500: &rest.Config{Host:"https://172.25.31.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-450500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-450500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2e292e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0317 11:16:11.884247    8508 kubeadm.go:483] Overriding stale ClientConfig host https://172.25.31.254:8443 with https://172.25.16.34:8443
	I0317 11:16:11.885758    8508 node_ready.go:35] waiting up to 6m0s for node "ha-450500-m03" to be "Ready" ...
	I0317 11:16:11.886004    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:11.886028    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:11.886028    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:11.886028    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:11.903592    8508 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0317 11:16:12.387334    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:12.387334    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:12.387334    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:12.387334    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:12.393585    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:12.887531    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:12.887531    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:12.887531    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:12.887531    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:12.901476    8508 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0317 11:16:13.386852    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:13.386852    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:13.386852    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:13.386852    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:13.391730    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:13.886586    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:13.886586    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:13.886586    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:13.886586    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:13.893048    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:13.893048    8508 node_ready.go:53] node "ha-450500-m03" has status "Ready":"False"
	I0317 11:16:14.386817    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:14.386817    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:14.386817    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:14.386817    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:14.468858    8508 round_trippers.go:581] Response Status: 200 OK in 82 milliseconds
	I0317 11:16:14.886405    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:14.886405    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:14.886405    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:14.886405    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:14.891748    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:15.385977    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:15.386552    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:15.386552    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:15.386552    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:15.391295    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:15.886719    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:15.886719    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:15.886719    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:15.886719    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:15.892174    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:16.387104    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:16.387203    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:16.387203    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:16.387203    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:16.475472    8508 round_trippers.go:581] Response Status: 200 OK in 88 milliseconds
	I0317 11:16:16.475472    8508 node_ready.go:53] node "ha-450500-m03" has status "Ready":"False"
	I0317 11:16:16.886123    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:16.886123    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:16.886123    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:16.886123    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:16.891788    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:17.386941    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:17.386941    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:17.386941    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:17.386941    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:17.394088    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:17.886451    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:17.886451    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:17.886451    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:17.886451    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:17.893328    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:18.385878    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:18.385878    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:18.385878    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:18.385878    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:18.395417    8508 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0317 11:16:18.886536    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:18.886536    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:18.886536    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:18.886536    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:18.893985    8508 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 11:16:18.894968    8508 node_ready.go:53] node "ha-450500-m03" has status "Ready":"False"
	I0317 11:16:19.387318    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:19.387436    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:19.387436    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:19.387436    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:19.391529    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:19.886246    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:19.886246    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:19.886246    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:19.886246    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:19.892563    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:20.386019    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:20.386019    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:20.386019    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:20.386019    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:20.390740    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:20.886317    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:20.886317    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:20.886317    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:20.886317    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:20.891455    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:21.387211    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:21.387321    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:21.387321    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:21.387321    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:21.392614    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:21.393026    8508 node_ready.go:53] node "ha-450500-m03" has status "Ready":"False"
	I0317 11:16:21.886219    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:21.886219    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:21.886219    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:21.886219    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:21.895291    8508 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0317 11:16:22.386927    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:22.386927    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:22.386927    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:22.386927    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:22.392583    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:22.887166    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:22.887219    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:22.887219    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:22.887219    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:22.893309    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:23.387394    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:23.387394    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:23.387394    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:23.387394    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:23.399696    8508 round_trippers.go:581] Response Status: 200 OK in 12 milliseconds
	I0317 11:16:23.400321    8508 node_ready.go:53] node "ha-450500-m03" has status "Ready":"False"
	I0317 11:16:23.885990    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:23.885990    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:23.885990    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:23.885990    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:23.891923    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:24.386274    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:24.386274    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:24.386274    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:24.386274    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:24.392867    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:24.886675    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:24.886675    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:24.886675    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:24.886675    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:24.891639    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:25.386476    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:25.386476    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:25.386476    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:25.386476    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:25.391607    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:25.886873    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:25.886873    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:25.887465    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:25.887465    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:25.893666    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:25.893890    8508 node_ready.go:53] node "ha-450500-m03" has status "Ready":"False"
	I0317 11:16:26.386514    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:26.386514    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:26.386514    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:26.386514    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:26.391915    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:26.886674    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:26.886674    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:26.886674    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:26.886674    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:26.892406    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:27.386705    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:27.386705    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:27.386705    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:27.386705    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:27.391834    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:27.886317    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:27.886317    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:27.886317    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:27.886317    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:27.892627    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:28.386600    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:28.386600    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:28.386600    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:28.386600    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:28.392830    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:28.393020    8508 node_ready.go:53] node "ha-450500-m03" has status "Ready":"False"
	I0317 11:16:28.887122    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:28.887122    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:28.887122    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:28.887122    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:28.900264    8508 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0317 11:16:29.386651    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:29.386651    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:29.386651    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:29.386651    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:29.393241    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:29.886583    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:29.886583    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:29.886583    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:29.886583    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:29.902603    8508 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0317 11:16:30.386446    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:30.386498    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:30.386498    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:30.386585    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:30.403914    8508 round_trippers.go:581] Response Status: 200 OK in 17 milliseconds
	I0317 11:16:30.404142    8508 node_ready.go:53] node "ha-450500-m03" has status "Ready":"False"
	I0317 11:16:30.886312    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:30.886312    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:30.886312    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:30.886312    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:30.892530    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:31.386592    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:31.386592    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.386592    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.386592    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.392309    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:31.887272    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:31.887272    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.887418    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.887418    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.900494    8508 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0317 11:16:31.900904    8508 node_ready.go:49] node "ha-450500-m03" has status "Ready":"True"
	I0317 11:16:31.900904    8508 node_ready.go:38] duration metric: took 20.0149314s for node "ha-450500-m03" to be "Ready" ...
	I0317 11:16:31.900904    8508 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:16:31.901091    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:16:31.901160    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.901160    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.901160    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.909462    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:16:31.913224    8508 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-qd2nj" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:31.913290    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-qd2nj
	I0317 11:16:31.913290    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.913419    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.913419    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.925538    8508 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0317 11:16:31.926010    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:31.926072    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.926141    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.926304    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.932023    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:31.932023    8508 pod_ready.go:93] pod "coredns-668d6bf9bc-qd2nj" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:31.932023    8508 pod_ready.go:82] duration metric: took 18.7325ms for pod "coredns-668d6bf9bc-qd2nj" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:31.932023    8508 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-rhhkv" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:31.932023    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-rhhkv
	I0317 11:16:31.932023    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.932023    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.932023    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.936805    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:31.936805    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:31.936805    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.936805    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.936805    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.941339    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:31.941628    8508 pod_ready.go:93] pod "coredns-668d6bf9bc-rhhkv" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:31.941687    8508 pod_ready.go:82] duration metric: took 9.6644ms for pod "coredns-668d6bf9bc-rhhkv" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:31.941687    8508 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:31.941839    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450500
	I0317 11:16:31.941855    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.941855    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.941855    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.945778    8508 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 11:16:31.945849    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:31.945849    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.945849    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.945849    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.948565    8508 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 11:16:31.949812    8508 pod_ready.go:93] pod "etcd-ha-450500" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:31.949881    8508 pod_ready.go:82] duration metric: took 8.194ms for pod "etcd-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:31.949881    8508 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:31.949990    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450500-m02
	I0317 11:16:31.950042    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.950042    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.950082    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.955371    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:31.955965    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:31.956013    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:31.956013    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:31.956044    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:31.959317    8508 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 11:16:31.959317    8508 pod_ready.go:93] pod "etcd-ha-450500-m02" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:31.959317    8508 pod_ready.go:82] duration metric: took 9.4359ms for pod "etcd-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:31.959317    8508 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450500-m03" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:32.088255    8508 request.go:661] Waited for 128.9363ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450500-m03
	I0317 11:16:32.088255    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450500-m03
	I0317 11:16:32.088255    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:32.088255    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:32.088255    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:32.095739    8508 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 11:16:32.287665    8508 request.go:661] Waited for 191.3367ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:32.287665    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:32.287665    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:32.287665    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:32.287665    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:32.299344    8508 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0317 11:16:32.299344    8508 pod_ready.go:93] pod "etcd-ha-450500-m03" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:32.299344    8508 pod_ready.go:82] duration metric: took 340.024ms for pod "etcd-ha-450500-m03" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:32.299344    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:32.487244    8508 request.go:661] Waited for 186.6554ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500
	I0317 11:16:32.487244    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500
	I0317 11:16:32.487244    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:32.487244    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:32.487244    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:32.495674    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:16:32.688006    8508 request.go:661] Waited for 192.3307ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:32.688006    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:32.688006    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:32.688006    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:32.688006    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:32.695816    8508 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 11:16:32.696361    8508 pod_ready.go:93] pod "kube-apiserver-ha-450500" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:32.696361    8508 pod_ready.go:82] duration metric: took 397.0145ms for pod "kube-apiserver-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:32.696361    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:32.888084    8508 request.go:661] Waited for 191.505ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500-m02
	I0317 11:16:32.888084    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500-m02
	I0317 11:16:32.888084    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:32.888084    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:32.888084    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:32.894542    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:33.087450    8508 request.go:661] Waited for 192.3175ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:33.087450    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:33.087818    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:33.087818    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:33.087818    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:33.093980    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:33.094323    8508 pod_ready.go:93] pod "kube-apiserver-ha-450500-m02" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:33.094323    8508 pod_ready.go:82] duration metric: took 397.8266ms for pod "kube-apiserver-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:33.094442    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450500-m03" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:33.287377    8508 request.go:661] Waited for 192.8534ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500-m03
	I0317 11:16:33.287377    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450500-m03
	I0317 11:16:33.287377    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:33.287377    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:33.287377    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:33.293425    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:33.488085    8508 request.go:661] Waited for 194.0931ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:33.488085    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:33.488085    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:33.488085    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:33.488085    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:33.493007    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:33.493929    8508 pod_ready.go:93] pod "kube-apiserver-ha-450500-m03" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:33.494052    8508 pod_ready.go:82] duration metric: took 399.484ms for pod "kube-apiserver-ha-450500-m03" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:33.494052    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:33.687763    8508 request.go:661] Waited for 193.7099ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500
	I0317 11:16:33.687763    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500
	I0317 11:16:33.687763    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:33.687763    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:33.687763    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:33.693730    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:33.887189    8508 request.go:661] Waited for 192.8194ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:33.887688    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:33.887755    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:33.887755    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:33.887755    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:33.893329    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:33.893754    8508 pod_ready.go:93] pod "kube-controller-manager-ha-450500" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:33.893837    8508 pod_ready.go:82] duration metric: took 399.7819ms for pod "kube-controller-manager-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:33.893837    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:34.087363    8508 request.go:661] Waited for 193.436ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500-m02
	I0317 11:16:34.087363    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500-m02
	I0317 11:16:34.087363    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:34.087363    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:34.087363    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:34.095462    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:16:34.287197    8508 request.go:661] Waited for 191.1807ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:34.287197    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:34.287197    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:34.287197    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:34.287197    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:34.293003    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:34.293762    8508 pod_ready.go:93] pod "kube-controller-manager-ha-450500-m02" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:34.293843    8508 pod_ready.go:82] duration metric: took 400.0032ms for pod "kube-controller-manager-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:34.293843    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450500-m03" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:34.487178    8508 request.go:661] Waited for 193.2463ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500-m03
	I0317 11:16:34.487178    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450500-m03
	I0317 11:16:34.487178    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:34.487178    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:34.487178    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:34.493003    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:34.687719    8508 request.go:661] Waited for 194.1884ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:34.688166    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:34.688230    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:34.688230    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:34.688230    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:34.694427    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:34.694980    8508 pod_ready.go:93] pod "kube-controller-manager-ha-450500-m03" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:34.694980    8508 pod_ready.go:82] duration metric: took 401.1338ms for pod "kube-controller-manager-ha-450500-m03" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:34.694980    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fthkw" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:34.887883    8508 request.go:661] Waited for 192.7972ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fthkw
	I0317 11:16:34.887883    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fthkw
	I0317 11:16:34.887883    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:34.887883    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:34.887883    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:34.893669    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:35.088086    8508 request.go:661] Waited for 193.8774ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:35.088086    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:35.088086    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:35.088086    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:35.088086    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:35.095339    8508 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 11:16:35.095844    8508 pod_ready.go:93] pod "kube-proxy-fthkw" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:35.095844    8508 pod_ready.go:82] duration metric: took 400.7573ms for pod "kube-proxy-fthkw" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:35.095844    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jzvxr" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:35.287641    8508 request.go:661] Waited for 191.6027ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzvxr
	I0317 11:16:35.287641    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzvxr
	I0317 11:16:35.287641    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:35.288146    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:35.288146    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:35.293351    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:35.487876    8508 request.go:661] Waited for 194.0706ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:35.487876    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:35.488460    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:35.488460    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:35.488460    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:35.492905    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:35.494059    8508 pod_ready.go:93] pod "kube-proxy-jzvxr" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:35.494059    8508 pod_ready.go:82] duration metric: took 398.2116ms for pod "kube-proxy-jzvxr" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:35.494059    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ktktm" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:35.687715    8508 request.go:661] Waited for 193.6539ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktktm
	I0317 11:16:35.687715    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktktm
	I0317 11:16:35.687715    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:35.687715    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:35.687715    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:35.694200    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:35.887633    8508 request.go:661] Waited for 192.7209ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:35.887633    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:35.888210    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:35.888210    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:35.888210    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:35.892941    8508 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 11:16:35.895085    8508 pod_ready.go:93] pod "kube-proxy-ktktm" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:35.895085    8508 pod_ready.go:82] duration metric: took 401.0231ms for pod "kube-proxy-ktktm" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:35.895144    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:36.088002    8508 request.go:661] Waited for 192.764ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500
	I0317 11:16:36.088002    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500
	I0317 11:16:36.088002    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:36.088002    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:36.088002    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:36.094659    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:36.287520    8508 request.go:661] Waited for 192.1951ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:36.288335    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500
	I0317 11:16:36.288335    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:36.288335    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:36.288335    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:36.294581    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:36.294581    8508 pod_ready.go:93] pod "kube-scheduler-ha-450500" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:36.294581    8508 pod_ready.go:82] duration metric: took 399.4341ms for pod "kube-scheduler-ha-450500" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:36.294581    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:36.488440    8508 request.go:661] Waited for 193.8577ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500-m02
	I0317 11:16:36.488440    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500-m02
	I0317 11:16:36.488440    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:36.488440    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:36.488440    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:36.495062    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:36.687645    8508 request.go:661] Waited for 191.3673ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:36.687645    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m02
	I0317 11:16:36.687645    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:36.687645    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:36.687645    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:36.694499    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:36.694825    8508 pod_ready.go:93] pod "kube-scheduler-ha-450500-m02" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:36.694950    8508 pod_ready.go:82] duration metric: took 400.241ms for pod "kube-scheduler-ha-450500-m02" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:36.694950    8508 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450500-m03" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:36.888199    8508 request.go:661] Waited for 193.247ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500-m03
	I0317 11:16:36.888199    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450500-m03
	I0317 11:16:36.888199    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:36.888199    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:36.888199    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:36.895067    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:37.087623    8508 request.go:661] Waited for 192.0996ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:37.087623    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes/ha-450500-m03
	I0317 11:16:37.088122    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:37.088163    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:37.088163    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:37.096672    8508 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 11:16:37.097577    8508 pod_ready.go:93] pod "kube-scheduler-ha-450500-m03" in "kube-system" namespace has status "Ready":"True"
	I0317 11:16:37.097695    8508 pod_ready.go:82] duration metric: took 402.7418ms for pod "kube-scheduler-ha-450500-m03" in "kube-system" namespace to be "Ready" ...
	I0317 11:16:37.097695    8508 pod_ready.go:39] duration metric: took 5.1966538s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 11:16:37.097829    8508 api_server.go:52] waiting for apiserver process to appear ...
	I0317 11:16:37.109517    8508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 11:16:37.139738    8508 api_server.go:72] duration metric: took 25.8406808s to wait for apiserver process to appear ...
	I0317 11:16:37.139738    8508 api_server.go:88] waiting for apiserver healthz status ...
	I0317 11:16:37.139738    8508 api_server.go:253] Checking apiserver healthz at https://172.25.16.34:8443/healthz ...
	I0317 11:16:37.147832    8508 api_server.go:279] https://172.25.16.34:8443/healthz returned 200:
	ok
	I0317 11:16:37.147991    8508 round_trippers.go:470] GET https://172.25.16.34:8443/version
	I0317 11:16:37.148015    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:37.148015    8508 round_trippers.go:480]     Accept: application/json, */*
	I0317 11:16:37.148015    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:37.148675    8508 round_trippers.go:581] Response Status: 200 OK in 0 milliseconds
	I0317 11:16:37.149742    8508 api_server.go:141] control plane version: v1.32.2
	I0317 11:16:37.149933    8508 api_server.go:131] duration metric: took 10.1947ms to wait for apiserver health ...
	I0317 11:16:37.149933    8508 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 11:16:37.288150    8508 request.go:661] Waited for 138.0872ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:16:37.288150    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:16:37.288150    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:37.288150    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:37.288150    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:37.294610    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:37.298851    8508 system_pods.go:59] 24 kube-system pods found
	I0317 11:16:37.298851    8508 system_pods.go:61] "coredns-668d6bf9bc-qd2nj" [1f982191-c45a-4681-907d-a0d9220b1f77] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "coredns-668d6bf9bc-rhhkv" [0dc113a4-430f-4c5b-bc05-05d8cc014ed7] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "etcd-ha-450500" [7a735c5c-89ec-488d-95c8-f7fa1160fa3c] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "etcd-ha-450500-m02" [78234926-bd4c-41f7-9f48-43e2bcd543a4] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "etcd-ha-450500-m03" [c18e3ae6-30ed-44d8-8c4a-5dad20e962f9] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kindnet-94r58" [4b18e7c6-4105-4037-8742-d58ad9eda200] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kindnet-ch8f7" [6247c683-e723-4b72-b373-89cb4f1b576d] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kindnet-prwhr" [0f7a825d-bd7c-4428-8685-cfa8926ef827] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-apiserver-ha-450500" [232d0746-fa49-4d17-b36e-557164865a8f] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-apiserver-ha-450500-m02" [a445c598-90ff-4e54-a96f-3d206a54a108] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-apiserver-ha-450500-m03" [96fcebba-cad8-4023-b7f7-08ac83263448] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-controller-manager-ha-450500" [afff30e2-5638-4b0f-bbe0-ec65ff25eef4] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-controller-manager-ha-450500-m02" [e931480a-7ff2-4264-bef5-ff129c603e77] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-controller-manager-ha-450500-m03" [cf875e54-a5d1-48bb-906b-f0be64a0d579] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-proxy-fthkw" [8a9b1cd2-2eb8-49ac-8cc5-df138d6d0670] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-proxy-jzvxr" [eeae069e-8b5a-449e-9fe2-ecafcd9733eb] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-proxy-ktktm" [2900bbaf-f433-41ca-a7f2-8491834c1c3d] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-scheduler-ha-450500" [66d11ee5-babc-401c-bf3f-8bd94eb09e06] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-scheduler-ha-450500-m02" [053f7a43-bff9-4dc7-8d8d-126f265dfbed] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-scheduler-ha-450500-m03" [ee967b5b-f00f-4680-beec-6d938e449577] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-vip-ha-450500" [55e8c247-e66e-47b2-b766-ead538ec0b9a] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-vip-ha-450500-m02" [df1997ea-6b13-4caa-a041-2c63da793276] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "kube-vip-ha-450500-m03" [4153bf28-7e36-4ff9-9ddd-334201353a29] Running
	I0317 11:16:37.298851    8508 system_pods.go:61] "storage-provisioner" [b1a4725e-cedf-428a-a320-59d7374cba0d] Running
	I0317 11:16:37.298851    8508 system_pods.go:74] duration metric: took 148.9171ms to wait for pod list to return data ...
	I0317 11:16:37.298851    8508 default_sa.go:34] waiting for default service account to be created ...
	I0317 11:16:37.487397    8508 request.go:661] Waited for 187.5606ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/default/serviceaccounts
	I0317 11:16:37.487397    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/default/serviceaccounts
	I0317 11:16:37.487397    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:37.487397    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:37.487397    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:37.492424    8508 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 11:16:37.493202    8508 default_sa.go:45] found service account: "default"
	I0317 11:16:37.493202    8508 default_sa.go:55] duration metric: took 194.3496ms for default service account to be created ...
	I0317 11:16:37.493411    8508 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 11:16:37.688075    8508 request.go:661] Waited for 194.6258ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:16:37.688732    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/namespaces/kube-system/pods
	I0317 11:16:37.688732    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:37.688732    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:37.688934    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:37.695994    8508 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 11:16:37.698469    8508 system_pods.go:86] 24 kube-system pods found
	I0317 11:16:37.698568    8508 system_pods.go:89] "coredns-668d6bf9bc-qd2nj" [1f982191-c45a-4681-907d-a0d9220b1f77] Running
	I0317 11:16:37.698568    8508 system_pods.go:89] "coredns-668d6bf9bc-rhhkv" [0dc113a4-430f-4c5b-bc05-05d8cc014ed7] Running
	I0317 11:16:37.698568    8508 system_pods.go:89] "etcd-ha-450500" [7a735c5c-89ec-488d-95c8-f7fa1160fa3c] Running
	I0317 11:16:37.698568    8508 system_pods.go:89] "etcd-ha-450500-m02" [78234926-bd4c-41f7-9f48-43e2bcd543a4] Running
	I0317 11:16:37.698568    8508 system_pods.go:89] "etcd-ha-450500-m03" [c18e3ae6-30ed-44d8-8c4a-5dad20e962f9] Running
	I0317 11:16:37.698568    8508 system_pods.go:89] "kindnet-94r58" [4b18e7c6-4105-4037-8742-d58ad9eda200] Running
	I0317 11:16:37.698568    8508 system_pods.go:89] "kindnet-ch8f7" [6247c683-e723-4b72-b373-89cb4f1b576d] Running
	I0317 11:16:37.698568    8508 system_pods.go:89] "kindnet-prwhr" [0f7a825d-bd7c-4428-8685-cfa8926ef827] Running
	I0317 11:16:37.698716    8508 system_pods.go:89] "kube-apiserver-ha-450500" [232d0746-fa49-4d17-b36e-557164865a8f] Running
	I0317 11:16:37.698716    8508 system_pods.go:89] "kube-apiserver-ha-450500-m02" [a445c598-90ff-4e54-a96f-3d206a54a108] Running
	I0317 11:16:37.698716    8508 system_pods.go:89] "kube-apiserver-ha-450500-m03" [96fcebba-cad8-4023-b7f7-08ac83263448] Running
	I0317 11:16:37.698716    8508 system_pods.go:89] "kube-controller-manager-ha-450500" [afff30e2-5638-4b0f-bbe0-ec65ff25eef4] Running
	I0317 11:16:37.698822    8508 system_pods.go:89] "kube-controller-manager-ha-450500-m02" [e931480a-7ff2-4264-bef5-ff129c603e77] Running
	I0317 11:16:37.698822    8508 system_pods.go:89] "kube-controller-manager-ha-450500-m03" [cf875e54-a5d1-48bb-906b-f0be64a0d579] Running
	I0317 11:16:37.698822    8508 system_pods.go:89] "kube-proxy-fthkw" [8a9b1cd2-2eb8-49ac-8cc5-df138d6d0670] Running
	I0317 11:16:37.698822    8508 system_pods.go:89] "kube-proxy-jzvxr" [eeae069e-8b5a-449e-9fe2-ecafcd9733eb] Running
	I0317 11:16:37.698822    8508 system_pods.go:89] "kube-proxy-ktktm" [2900bbaf-f433-41ca-a7f2-8491834c1c3d] Running
	I0317 11:16:37.698937    8508 system_pods.go:89] "kube-scheduler-ha-450500" [66d11ee5-babc-401c-bf3f-8bd94eb09e06] Running
	I0317 11:16:37.698937    8508 system_pods.go:89] "kube-scheduler-ha-450500-m02" [053f7a43-bff9-4dc7-8d8d-126f265dfbed] Running
	I0317 11:16:37.698937    8508 system_pods.go:89] "kube-scheduler-ha-450500-m03" [ee967b5b-f00f-4680-beec-6d938e449577] Running
	I0317 11:16:37.698937    8508 system_pods.go:89] "kube-vip-ha-450500" [55e8c247-e66e-47b2-b766-ead538ec0b9a] Running
	I0317 11:16:37.698937    8508 system_pods.go:89] "kube-vip-ha-450500-m02" [df1997ea-6b13-4caa-a041-2c63da793276] Running
	I0317 11:16:37.698937    8508 system_pods.go:89] "kube-vip-ha-450500-m03" [4153bf28-7e36-4ff9-9ddd-334201353a29] Running
	I0317 11:16:37.699042    8508 system_pods.go:89] "storage-provisioner" [b1a4725e-cedf-428a-a320-59d7374cba0d] Running
	I0317 11:16:37.699042    8508 system_pods.go:126] duration metric: took 205.6293ms to wait for k8s-apps to be running ...
	I0317 11:16:37.699042    8508 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 11:16:37.712405    8508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 11:16:37.736979    8508 system_svc.go:56] duration metric: took 37.937ms WaitForService to wait for kubelet
	I0317 11:16:37.736979    8508 kubeadm.go:582] duration metric: took 26.437917s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 11:16:37.736979    8508 node_conditions.go:102] verifying NodePressure condition ...
	I0317 11:16:37.888682    8508 request.go:661] Waited for 151.7019ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.34:8443/api/v1/nodes
	I0317 11:16:37.888682    8508 round_trippers.go:470] GET https://172.25.16.34:8443/api/v1/nodes
	I0317 11:16:37.888682    8508 round_trippers.go:476] Request Headers:
	I0317 11:16:37.888682    8508 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 11:16:37.888682    8508 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 11:16:37.894930    8508 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 11:16:37.894930    8508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 11:16:37.894930    8508 node_conditions.go:123] node cpu capacity is 2
	I0317 11:16:37.894930    8508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 11:16:37.894930    8508 node_conditions.go:123] node cpu capacity is 2
	I0317 11:16:37.894930    8508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 11:16:37.895482    8508 node_conditions.go:123] node cpu capacity is 2
	I0317 11:16:37.895482    8508 node_conditions.go:105] duration metric: took 158.5014ms to run NodePressure ...
	I0317 11:16:37.895482    8508 start.go:241] waiting for startup goroutines ...
	I0317 11:16:37.895556    8508 start.go:255] writing updated cluster config ...
	I0317 11:16:37.907575    8508 ssh_runner.go:195] Run: rm -f paused
	I0317 11:16:38.055753    8508 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 11:16:38.060861    8508 out.go:177] * Done! kubectl is now configured to use "ha-450500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 17 11:08:54 ha-450500 cri-dockerd[1348]: time="2025-03-17T11:08:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a48ca0bd821650855c3c1b374387b1c204e09ce396b8392b93b3b1d1fede54ec/resolv.conf as [nameserver 172.25.16.1]"
	Mar 17 11:08:54 ha-450500 cri-dockerd[1348]: time="2025-03-17T11:08:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/65fd3bc7770e1e6f1a34dd73c3fcb5263502b371e093fff8b9cc592c1f36c9a0/resolv.conf as [nameserver 172.25.16.1]"
	Mar 17 11:08:54 ha-450500 cri-dockerd[1348]: time="2025-03-17T11:08:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b54c1f80fa563c0fad7d88ba3b16ab3cef5d1eab286511fe3d8a68198abbab03/resolv.conf as [nameserver 172.25.16.1]"
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.100638769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.100906369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.100935769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.101271569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.211537851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.211627651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.211642551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.211743251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.272664697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.272793597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.272830497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:08:55 ha-450500 dockerd[1454]: time="2025-03-17T11:08:55.273571297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:17:16 ha-450500 dockerd[1454]: time="2025-03-17T11:17:16.709348420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 11:17:16 ha-450500 dockerd[1454]: time="2025-03-17T11:17:16.709519821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 11:17:16 ha-450500 dockerd[1454]: time="2025-03-17T11:17:16.710228026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:17:16 ha-450500 dockerd[1454]: time="2025-03-17T11:17:16.712931146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:17:16 ha-450500 cri-dockerd[1348]: time="2025-03-17T11:17:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7b175ad9e9b3c38a1c5218d48a3b59d2c6c88923d3d1235885b096672f542e7e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 17 11:17:19 ha-450500 cri-dockerd[1348]: time="2025-03-17T11:17:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 17 11:17:20 ha-450500 dockerd[1454]: time="2025-03-17T11:17:20.227331776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 11:17:20 ha-450500 dockerd[1454]: time="2025-03-17T11:17:20.227524978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 11:17:20 ha-450500 dockerd[1454]: time="2025-03-17T11:17:20.227548679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 11:17:20 ha-450500 dockerd[1454]: time="2025-03-17T11:17:20.228548791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ac5beadc15387       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   7b175ad9e9b3c       busybox-58667487b6-w6ngz
	8b6dc12f0f0ae       c69fa2e9cbf5f                                                                                         26 minutes ago      Running             coredns                   0                   b54c1f80fa563       coredns-668d6bf9bc-qd2nj
	c96833115608b       c69fa2e9cbf5f                                                                                         26 minutes ago      Running             coredns                   0                   a48ca0bd82165       coredns-668d6bf9bc-rhhkv
	bb5aa5f55fea9       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   65fd3bc7770e1       storage-provisioner
	f00705dba2c6f       kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495              26 minutes ago      Running             kindnet-cni               0                   f3e6d9f300163       kindnet-prwhr
	fe97a5e85c404       f1332858868e1                                                                                         27 minutes ago      Running             kube-proxy                0                   4436ea277f3d0       kube-proxy-jzvxr
	7409d75987fc7       ghcr.io/kube-vip/kube-vip@sha256:717b8bef2758c10042d64ae7949201ef7f243d928fce265b04e488e844bf9528     27 minutes ago      Running             kube-vip                  0                   f8d96e2c076f1       kube-vip-ha-450500
	b11cf03bfdb6e       a9e7e6b294baf                                                                                         27 minutes ago      Running             etcd                      0                   48c77eb7fa6a3       etcd-ha-450500
	b3f198d2c66ea       85b7a174738ba                                                                                         27 minutes ago      Running             kube-apiserver            0                   fb904acbea4b4       kube-apiserver-ha-450500
	c94d28127c400       b6a454c5a800d                                                                                         27 minutes ago      Running             kube-controller-manager   0                   aeb305ea186a6       kube-controller-manager-ha-450500
	42fa7c58af327       d8e673e7c9983                                                                                         27 minutes ago      Running             kube-scheduler            0                   053e0f10ab0a0       kube-scheduler-ha-450500
	
	
	==> coredns [8b6dc12f0f0a] <==
	[INFO] 10.244.2.2:33333 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.09351468s
	[INFO] 10.244.2.2:36375 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000086301s
	[INFO] 10.244.2.2:55168 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000157302s
	[INFO] 10.244.1.2:33030 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231303s
	[INFO] 10.244.1.2:33270 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.054625989s
	[INFO] 10.244.1.2:52717 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000213403s
	[INFO] 10.244.1.2:57766 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000219703s
	[INFO] 10.244.1.2:55057 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000208902s
	[INFO] 10.244.1.2:44848 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097901s
	[INFO] 10.244.0.4:42535 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000103302s
	[INFO] 10.244.0.4:58382 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000334304s
	[INFO] 10.244.0.4:50531 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198003s
	[INFO] 10.244.0.4:35022 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000252903s
	[INFO] 10.244.2.2:53460 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000084802s
	[INFO] 10.244.2.2:55347 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.117711883s
	[INFO] 10.244.2.2:47928 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131901s
	[INFO] 10.244.2.2:60937 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000648408s
	[INFO] 10.244.1.2:51522 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189103s
	[INFO] 10.244.1.2:59967 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067301s
	[INFO] 10.244.0.4:52392 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000280604s
	[INFO] 10.244.1.2:35122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123601s
	[INFO] 10.244.1.2:37968 - 5 "PTR IN 1.16.25.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000171002s
	[INFO] 10.244.0.4:37046 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000272104s
	[INFO] 10.244.2.2:47382 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000175203s
	[INFO] 10.244.2.2:35633 - 5 "PTR IN 1.16.25.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000255103s
	
	
	==> coredns [c96833115608] <==
	[INFO] 10.244.1.2:34400 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153602s
	[INFO] 10.244.0.4:41325 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115602s
	[INFO] 10.244.0.4:39113 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000069801s
	[INFO] 10.244.0.4:48614 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000197902s
	[INFO] 10.244.0.4:36131 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123701s
	[INFO] 10.244.2.2:60933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164402s
	[INFO] 10.244.2.2:43194 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138602s
	[INFO] 10.244.2.2:44880 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000230302s
	[INFO] 10.244.2.2:41107 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000305204s
	[INFO] 10.244.1.2:37068 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000173802s
	[INFO] 10.244.1.2:47394 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121202s
	[INFO] 10.244.0.4:54230 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164902s
	[INFO] 10.244.0.4:60421 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000217603s
	[INFO] 10.244.0.4:41013 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099902s
	[INFO] 10.244.2.2:34058 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123702s
	[INFO] 10.244.2.2:55260 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000268603s
	[INFO] 10.244.2.2:44536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118501s
	[INFO] 10.244.2.2:53382 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070701s
	[INFO] 10.244.1.2:57646 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000178202s
	[INFO] 10.244.1.2:33199 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000239303s
	[INFO] 10.244.0.4:55568 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145201s
	[INFO] 10.244.0.4:43872 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000246803s
	[INFO] 10.244.0.4:50569 - 5 "PTR IN 1.16.25.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000147402s
	[INFO] 10.244.2.2:35716 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162402s
	[INFO] 10.244.2.2:40961 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000375505s
	
	
	==> describe nodes <==
	Name:               ha-450500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76
	                    minikube.k8s.io/name=ha-450500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_17T11_08_27_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 11:08:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450500
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 11:35:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 11:32:03 +0000   Mon, 17 Mar 2025 11:08:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 11:32:03 +0000   Mon, 17 Mar 2025 11:08:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 11:32:03 +0000   Mon, 17 Mar 2025 11:08:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 11:32:03 +0000   Mon, 17 Mar 2025 11:08:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.16.34
	  Hostname:    ha-450500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 086b42e2f51840d48852f2b6b010a1c5
	  System UUID:                b88424f0-f6b6-e042-a8c7-9b475f6d85d7
	  Boot ID:                    e0170758-1bda-40b1-bc10-3eb7052a9b72
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-w6ngz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-668d6bf9bc-qd2nj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-668d6bf9bc-rhhkv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-450500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-prwhr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-450500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-450500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-jzvxr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-450500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-450500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27m   kube-proxy       
	  Normal  Starting                 27m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m   kubelet          Node ha-450500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m   kubelet          Node ha-450500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m   kubelet          Node ha-450500 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27m   node-controller  Node ha-450500 event: Registered Node ha-450500 in Controller
	  Normal  NodeReady                26m   kubelet          Node ha-450500 status is now: NodeReady
	  Normal  RegisteredNode           23m   node-controller  Node ha-450500 event: Registered Node ha-450500 in Controller
	  Normal  RegisteredNode           19m   node-controller  Node ha-450500 event: Registered Node ha-450500 in Controller
	
	
	Name:               ha-450500-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76
	                    minikube.k8s.io/name=ha-450500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_03_17T11_12_11_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 11:12:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 11:34:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 11:34:20 +0000   Mon, 17 Mar 2025 11:12:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 11:34:20 +0000   Mon, 17 Mar 2025 11:12:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 11:34:20 +0000   Mon, 17 Mar 2025 11:12:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 11:34:20 +0000   Mon, 17 Mar 2025 11:12:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.21.189
	  Hostname:    ha-450500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 28f48d510cbb410a972a113bd4575506
	  System UUID:                ae8a8a30-dedf-2944-aa61-0f3914deab55
	  Boot ID:                    4884bf50-6613-47a2-85ee-7ff2ee44b27c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-9977c                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-450500-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-ch8f7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-450500-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-450500-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-fthkw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-450500-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-450500-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-450500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-450500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-450500-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23m                node-controller  Node ha-450500-m02 event: Registered Node ha-450500-m02 in Controller
	  Normal  RegisteredNode           23m                node-controller  Node ha-450500-m02 event: Registered Node ha-450500-m02 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-450500-m02 event: Registered Node ha-450500-m02 in Controller
	
	
	Name:               ha-450500-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76
	                    minikube.k8s.io/name=ha-450500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_03_17T11_16_10_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 11:16:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 11:35:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 11:32:32 +0000   Mon, 17 Mar 2025 11:16:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 11:32:32 +0000   Mon, 17 Mar 2025 11:16:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 11:32:32 +0000   Mon, 17 Mar 2025 11:16:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 11:32:32 +0000   Mon, 17 Mar 2025 11:16:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.19.102
	  Hostname:    ha-450500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 a954376ce9cd4e58ba8799d1ee27e53b
	  System UUID:                0fd182b4-ce08-b24b-a567-bff91ecedb7d
	  Boot ID:                    19342931-4b92-45ba-8dae-537376f25bbf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-xlpx5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-450500-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-94r58                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-450500-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-450500-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-ktktm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-450500-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-450500-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-450500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-450500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-450500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-450500-m03 event: Registered Node ha-450500-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-450500-m03 event: Registered Node ha-450500-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-450500-m03 event: Registered Node ha-450500-m03 in Controller
	
	
	Name:               ha-450500-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450500-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76
	                    minikube.k8s.io/name=ha-450500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_03_17T11_21_38_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 11:21:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450500-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 11:35:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 11:31:18 +0000   Mon, 17 Mar 2025 11:21:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 11:31:18 +0000   Mon, 17 Mar 2025 11:21:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 11:31:18 +0000   Mon, 17 Mar 2025 11:21:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 11:31:18 +0000   Mon, 17 Mar 2025 11:22:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.26.250
	  Hostname:    ha-450500-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6603b008f9584585a56aded1a26c3c8a
	  System UUID:                0ef00259-6bd6-9345-8f4a-b7e5969d26c8
	  Boot ID:                    1c5e5b08-1fad-4fda-b3ed-fbf3574da764
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kxm2c       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-proxy-hnm64    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x2 over 14m)  kubelet          Node ha-450500-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet          Node ha-450500-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node ha-450500-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node ha-450500-m04 event: Registered Node ha-450500-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-450500-m04 event: Registered Node ha-450500-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-450500-m04 event: Registered Node ha-450500-m04 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-450500-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.261044] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar17 11:07] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.170471] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[ +31.523484] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +0.111714] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.601464] systemd-fstab-generator[1052]: Ignoring "noauto" option for root device
	[  +0.193596] systemd-fstab-generator[1064]: Ignoring "noauto" option for root device
	[  +0.213073] systemd-fstab-generator[1078]: Ignoring "noauto" option for root device
	[  +2.889581] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.200551] systemd-fstab-generator[1313]: Ignoring "noauto" option for root device
	[  +0.197341] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +0.256376] systemd-fstab-generator[1340]: Ignoring "noauto" option for root device
	[Mar17 11:08] systemd-fstab-generator[1438]: Ignoring "noauto" option for root device
	[  +0.105755] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.694607] systemd-fstab-generator[1708]: Ignoring "noauto" option for root device
	[  +6.788709] systemd-fstab-generator[1859]: Ignoring "noauto" option for root device
	[  +0.105591] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.505315] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.579854] systemd-fstab-generator[2388]: Ignoring "noauto" option for root device
	[  +5.997758] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.240667] kauditd_printk_skb: 29 callbacks suppressed
	[Mar17 11:12] kauditd_printk_skb: 26 callbacks suppressed
	[Mar17 11:21] hrtimer: interrupt took 2228224 ns
	
	
	==> etcd [b11cf03bfdb6] <==
	{"level":"warn","ts":"2025-03-17T11:35:38.194250Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.201837Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.222941Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.302023Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.401427Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.544305Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.556686Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.565628Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.571498Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.581272Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.592595Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.601060Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.601608Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.610302Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.615467Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.620572Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.626145Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.634630Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.643130Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.649739Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.654203Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.658311Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.666245Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.677668Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-03-17T11:35:38.702037Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"329bf8d2aaab106a","from":"329bf8d2aaab106a","remote-peer-id":"2037854e1a7ecd38","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:35:38 up 29 min,  0 users,  load average: 0.25, 0.55, 0.54
	Linux ha-450500 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f00705dba2c6] <==
	I0317 11:35:00.821794       1 main.go:324] Node ha-450500-m04 has CIDR [10.244.3.0/24] 
	I0317 11:35:10.822480       1 main.go:297] Handling node with IPs: map[172.25.16.34:{}]
	I0317 11:35:10.822576       1 main.go:301] handling current node
	I0317 11:35:10.822596       1 main.go:297] Handling node with IPs: map[172.25.21.189:{}]
	I0317 11:35:10.822604       1 main.go:324] Node ha-450500-m02 has CIDR [10.244.1.0/24] 
	I0317 11:35:10.822845       1 main.go:297] Handling node with IPs: map[172.25.19.102:{}]
	I0317 11:35:10.823056       1 main.go:324] Node ha-450500-m03 has CIDR [10.244.2.0/24] 
	I0317 11:35:10.823700       1 main.go:297] Handling node with IPs: map[172.25.26.250:{}]
	I0317 11:35:10.823734       1 main.go:324] Node ha-450500-m04 has CIDR [10.244.3.0/24] 
	I0317 11:35:20.813156       1 main.go:297] Handling node with IPs: map[172.25.16.34:{}]
	I0317 11:35:20.813264       1 main.go:301] handling current node
	I0317 11:35:20.813314       1 main.go:297] Handling node with IPs: map[172.25.21.189:{}]
	I0317 11:35:20.813337       1 main.go:324] Node ha-450500-m02 has CIDR [10.244.1.0/24] 
	I0317 11:35:20.814268       1 main.go:297] Handling node with IPs: map[172.25.19.102:{}]
	I0317 11:35:20.814372       1 main.go:324] Node ha-450500-m03 has CIDR [10.244.2.0/24] 
	I0317 11:35:20.814787       1 main.go:297] Handling node with IPs: map[172.25.26.250:{}]
	I0317 11:35:20.814876       1 main.go:324] Node ha-450500-m04 has CIDR [10.244.3.0/24] 
	I0317 11:35:30.813132       1 main.go:297] Handling node with IPs: map[172.25.16.34:{}]
	I0317 11:35:30.813298       1 main.go:301] handling current node
	I0317 11:35:30.813317       1 main.go:297] Handling node with IPs: map[172.25.21.189:{}]
	I0317 11:35:30.813324       1 main.go:324] Node ha-450500-m02 has CIDR [10.244.1.0/24] 
	I0317 11:35:30.813821       1 main.go:297] Handling node with IPs: map[172.25.19.102:{}]
	I0317 11:35:30.814036       1 main.go:324] Node ha-450500-m03 has CIDR [10.244.2.0/24] 
	I0317 11:35:30.814483       1 main.go:297] Handling node with IPs: map[172.25.26.250:{}]
	I0317 11:35:30.814510       1 main.go:324] Node ha-450500-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [b3f198d2c66e] <==
	I0317 11:08:25.903393       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0317 11:08:25.949194       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0317 11:08:25.978408       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0317 11:08:30.491739       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0317 11:08:30.678800       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0317 11:16:03.540468       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 10.7µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0317 11:16:03.540828       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0317 11:16:03.542379       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0317 11:16:03.543836       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0317 11:16:03.567276       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="44.528619ms" method="PATCH" path="/api/v1/namespaces/default/events/ha-450500-m03.182d92e98a85f536" result=null
	E0317 11:17:24.192109       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54353: use of closed network connection
	E0317 11:17:24.757625       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54355: use of closed network connection
	E0317 11:17:25.368602       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54357: use of closed network connection
	E0317 11:17:25.932074       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54359: use of closed network connection
	E0317 11:17:26.467799       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54361: use of closed network connection
	E0317 11:17:27.098210       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54363: use of closed network connection
	E0317 11:17:27.607909       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54365: use of closed network connection
	E0317 11:17:28.103624       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54367: use of closed network connection
	E0317 11:17:28.626585       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54369: use of closed network connection
	E0317 11:17:29.533713       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54372: use of closed network connection
	E0317 11:17:40.041383       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54374: use of closed network connection
	E0317 11:17:40.557259       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54377: use of closed network connection
	E0317 11:17:51.028903       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54379: use of closed network connection
	E0317 11:17:51.538092       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54382: use of closed network connection
	E0317 11:18:02.030226       1 conn.go:339] Error on socket receive: read tcp 172.25.31.254:8443->172.25.16.1:54384: use of closed network connection
	
	
	==> kube-controller-manager [c94d28127c40] <==
	I0317 11:21:38.382651       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m04"
	I0317 11:21:39.100715       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m04"
	I0317 11:21:39.172516       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m04"
	I0317 11:21:40.171774       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-450500-m04"
	I0317 11:21:40.172149       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m04"
	I0317 11:21:40.206075       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m04"
	I0317 11:21:41.779830       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m04"
	I0317 11:21:41.912294       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m04"
	I0317 11:21:48.334150       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m04"
	I0317 11:21:52.262872       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500"
	I0317 11:22:08.856706       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m04"
	I0317 11:22:11.458364       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m04"
	I0317 11:22:11.464609       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-450500-m04"
	I0317 11:22:11.489798       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m04"
	I0317 11:22:11.821321       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m04"
	I0317 11:22:20.577855       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m03"
	I0317 11:24:10.375495       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m02"
	I0317 11:26:13.683563       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m04"
	I0317 11:26:57.800459       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500"
	I0317 11:27:26.240038       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m03"
	I0317 11:29:14.954087       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m02"
	I0317 11:31:18.220779       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m04"
	I0317 11:32:03.739717       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500"
	I0317 11:32:32.614352       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m03"
	I0317 11:34:20.750443       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-450500-m02"
	
	
	==> kube-proxy [fe97a5e85c40] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0317 11:08:32.506830       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0317 11:08:32.553358       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.16.34"]
	E0317 11:08:32.554355       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 11:08:32.632280       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0317 11:08:32.632439       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0317 11:08:32.632491       1 server_linux.go:170] "Using iptables Proxier"
	I0317 11:08:32.638138       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 11:08:32.641166       1 server.go:497] "Version info" version="v1.32.2"
	I0317 11:08:32.641324       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 11:08:32.647838       1 config.go:329] "Starting node config controller"
	I0317 11:08:32.649156       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 11:08:32.651526       1 config.go:199] "Starting service config controller"
	I0317 11:08:32.651752       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 11:08:32.651928       1 config.go:105] "Starting endpoint slice config controller"
	I0317 11:08:32.652176       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 11:08:32.750078       1 shared_informer.go:320] Caches are synced for node config
	I0317 11:08:32.753359       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0317 11:08:32.753395       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [42fa7c58af32] <==
	W0317 11:08:23.578418       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0317 11:08:23.580087       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:23.634088       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0317 11:08:23.634386       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:23.816337       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0317 11:08:23.817157       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:23.823127       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0317 11:08:23.823421       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:23.845289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0317 11:08:23.845337       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:23.933132       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0317 11:08:23.933400       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:23.977047       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0317 11:08:23.977261       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 11:08:24.004722       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0317 11:08:24.004765       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0317 11:08:25.923446       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0317 11:21:38.251532       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hnm64\": pod kube-proxy-hnm64 is already assigned to node \"ha-450500-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hnm64" node="ha-450500-m04"
	E0317 11:21:38.251585       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x9zvv\": pod kindnet-x9zvv is already assigned to node \"ha-450500-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-x9zvv" node="ha-450500-m04"
	E0317 11:21:38.266118       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod 2961fdd6-2e0a-443c-84af-268e819a82da(kube-system/kube-proxy-hnm64) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-hnm64"
	E0317 11:21:38.266187       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hnm64\": pod kube-proxy-hnm64 is already assigned to node \"ha-450500-m04\"" pod="kube-system/kube-proxy-hnm64"
	E0317 11:21:38.266134       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod 9314321f-1ce7-4067-a29e-d55bc47bf326(kube-system/kindnet-x9zvv) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-x9zvv"
	E0317 11:21:38.267462       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x9zvv\": pod kindnet-x9zvv is already assigned to node \"ha-450500-m04\"" pod="kube-system/kindnet-x9zvv"
	I0317 11:21:38.266227       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hnm64" node="ha-450500-m04"
	I0317 11:21:38.267871       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-x9zvv" node="ha-450500-m04"
	
	
	==> kubelet <==
	Mar 17 11:31:26 ha-450500 kubelet[2395]: E0317 11:31:26.159704    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Mar 17 11:31:26 ha-450500 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 17 11:31:26 ha-450500 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 17 11:31:26 ha-450500 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 17 11:31:26 ha-450500 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 17 11:32:26 ha-450500 kubelet[2395]: E0317 11:32:26.156621    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Mar 17 11:32:26 ha-450500 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 17 11:32:26 ha-450500 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 17 11:32:26 ha-450500 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 17 11:32:26 ha-450500 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 17 11:33:26 ha-450500 kubelet[2395]: E0317 11:33:26.156056    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Mar 17 11:33:26 ha-450500 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 17 11:33:26 ha-450500 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 17 11:33:26 ha-450500 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 17 11:33:26 ha-450500 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 17 11:34:26 ha-450500 kubelet[2395]: E0317 11:34:26.154921    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Mar 17 11:34:26 ha-450500 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 17 11:34:26 ha-450500 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 17 11:34:26 ha-450500 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 17 11:34:26 ha-450500 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 17 11:35:26 ha-450500 kubelet[2395]: E0317 11:35:26.156020    2395 iptables.go:577] "Could not set up iptables canary" err=<
	Mar 17 11:35:26 ha-450500 kubelet[2395]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 17 11:35:26 ha-450500 kubelet[2395]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 17 11:35:26 ha-450500 kubelet[2395]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 17 11:35:26 ha-450500 kubelet[2395]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-450500 -n ha-450500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-450500 -n ha-450500: (12.49882s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-450500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (65.76s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (58.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- exec busybox-58667487b6-kvm5b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- exec busybox-58667487b6-kvm5b -- sh -c "ping -c 1 172.25.16.1"
E0317 12:13:37.859195    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- exec busybox-58667487b6-kvm5b -- sh -c "ping -c 1 172.25.16.1": exit status 1 (10.4588113s)

                                                
                                                
-- stdout --
	PING 172.25.16.1 (172.25.16.1): 56 data bytes
	
	--- 172.25.16.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.25.16.1) from pod (busybox-58667487b6-kvm5b): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- exec busybox-58667487b6-vnkbn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- exec busybox-58667487b6-vnkbn -- sh -c "ping -c 1 172.25.16.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- exec busybox-58667487b6-vnkbn -- sh -c "ping -c 1 172.25.16.1": exit status 1 (10.4507008s)

                                                
                                                
-- stdout --
	PING 172.25.16.1 (172.25.16.1): 56 data bytes
	
	--- 172.25.16.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.25.16.1) from pod (busybox-58667487b6-vnkbn): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-781100 -n multinode-781100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-781100 -n multinode-781100: (12.1623949s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 logs -n 25: (9.0214447s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-803900 ssh -- ls                    | mount-start-2-803900 | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:01 UTC | 17 Mar 25 12:02 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-720200                           | mount-start-1-720200 | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:02 UTC | 17 Mar 25 12:02 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-803900 ssh -- ls                    | mount-start-2-803900 | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:02 UTC | 17 Mar 25 12:02 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-803900                           | mount-start-2-803900 | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:02 UTC | 17 Mar 25 12:03 UTC |
	| start   | -p mount-start-2-803900                           | mount-start-2-803900 | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:03 UTC | 17 Mar 25 12:05 UTC |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-803900 | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:05 UTC |                     |
	|         | --profile mount-start-2-803900 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-803900 ssh -- ls                    | mount-start-2-803900 | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:05 UTC | 17 Mar 25 12:05 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-803900                           | mount-start-2-803900 | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:05 UTC | 17 Mar 25 12:05 UTC |
	| delete  | -p mount-start-1-720200                           | mount-start-1-720200 | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:05 UTC | 17 Mar 25 12:05 UTC |
	| start   | -p multinode-781100                               | multinode-781100     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:05 UTC | 17 Mar 25 12:13 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-781100 -- apply -f                   | multinode-781100     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:13 UTC | 17 Mar 25 12:13 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-781100 -- rollout                    | multinode-781100     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:13 UTC | 17 Mar 25 12:13 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-781100 -- get pods -o                | multinode-781100     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:13 UTC | 17 Mar 25 12:13 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-781100 -- get pods -o                | multinode-781100     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:13 UTC | 17 Mar 25 12:13 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-781100 -- exec                       | multinode-781100     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:13 UTC | 17 Mar 25 12:13 UTC |
	|         | busybox-58667487b6-kvm5b --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-781100 -- exec                       | multinode-781100     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:13 UTC | 17 Mar 25 12:13 UTC |
	|         | busybox-58667487b6-vnkbn --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-781100 -- exec                       | multinode-781100     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:13 UTC | 17 Mar 25 12:13 UTC |
	|         | busybox-58667487b6-kvm5b --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-781100 -- exec                       | multinode-781100     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:13 UTC | 17 Mar 25 12:13 UTC |
	|         | busybox-58667487b6-vnkbn --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-781100 -- exec                       | multinode-781100     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:13 UTC | 17 Mar 25 12:13 UTC |
	|         | busybox-58667487b6-kvm5b -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-781100 -- exec                       | multinode-781100     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:13 UTC | 17 Mar 25 12:13 UTC |
	|         | busybox-58667487b6-vnkbn -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-781100 -- get pods -o                | multinode-781100     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:13 UTC | 17 Mar 25 12:13 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-781100 -- exec                       | multinode-781100     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:13 UTC | 17 Mar 25 12:13 UTC |
	|         | busybox-58667487b6-kvm5b                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-781100 -- exec                       | multinode-781100     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:13 UTC |                     |
	|         | busybox-58667487b6-kvm5b -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.16.1                          |                      |                   |         |                     |                     |
	| kubectl | -p multinode-781100 -- exec                       | multinode-781100     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:13 UTC | 17 Mar 25 12:13 UTC |
	|         | busybox-58667487b6-vnkbn                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-781100 -- exec                       | multinode-781100     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:13 UTC |                     |
	|         | busybox-58667487b6-vnkbn -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.25.16.1                          |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 12:05:58
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 12:05:58.786126    9924 out.go:345] Setting OutFile to fd 1788 ...
	I0317 12:05:58.861032    9924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:05:58.861566    9924 out.go:358] Setting ErrFile to fd 1280...
	I0317 12:05:58.861566    9924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:05:58.878702    9924 out.go:352] Setting JSON to false
	I0317 12:05:58.881389    9924 start.go:129] hostinfo: {"hostname":"minikube6","uptime":6935,"bootTime":1742206223,"procs":179,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5608 Build 19045.5608","kernelVersion":"10.0.19045.5608 Build 19045.5608","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0317 12:05:58.881389    9924 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0317 12:05:58.887435    9924 out.go:177] * [multinode-781100] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608
	I0317 12:05:58.891399    9924 notify.go:220] Checking for updates...
	I0317 12:05:58.894367    9924 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 12:05:58.896671    9924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 12:05:58.899384    9924 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0317 12:05:58.903029    9924 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 12:05:58.905928    9924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 12:05:58.909876    9924 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 12:05:58.911595    9924 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 12:06:04.160748    9924 out.go:177] * Using the hyperv driver based on user configuration
	I0317 12:06:04.166226    9924 start.go:297] selected driver: hyperv
	I0317 12:06:04.166226    9924 start.go:901] validating driver "hyperv" against <nil>
	I0317 12:06:04.166226    9924 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 12:06:04.215135    9924 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 12:06:04.217256    9924 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 12:06:04.217528    9924 cni.go:84] Creating CNI manager for ""
	I0317 12:06:04.217528    9924 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0317 12:06:04.217528    9924 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0317 12:06:04.217683    9924 start.go:340] cluster config:
	{Name:multinode-781100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-781100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:06:04.218200    9924 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 12:06:04.223123    9924 out.go:177] * Starting "multinode-781100" primary control-plane node in "multinode-781100" cluster
	I0317 12:06:04.226492    9924 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 12:06:04.226755    9924 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0317 12:06:04.226755    9924 cache.go:56] Caching tarball of preloaded images
	I0317 12:06:04.226887    9924 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 12:06:04.227372    9924 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0317 12:06:04.227569    9924 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\config.json ...
	I0317 12:06:04.227821    9924 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\config.json: {Name:mkbf82510dd9341a5957edf721aedba2d7df37cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:06:04.228082    9924 start.go:360] acquireMachinesLock for multinode-781100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 12:06:04.229150    9924 start.go:364] duration metric: took 1.0678ms to acquireMachinesLock for "multinode-781100"
	I0317 12:06:04.229150    9924 start.go:93] Provisioning new machine with config: &{Name:multinode-781100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-781100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 12:06:04.229150    9924 start.go:125] createHost starting for "" (driver="hyperv")
	I0317 12:06:04.232062    9924 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0317 12:06:04.232655    9924 start.go:159] libmachine.API.Create for "multinode-781100" (driver="hyperv")
	I0317 12:06:04.232655    9924 client.go:168] LocalClient.Create starting
	I0317 12:06:04.232954    9924 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0317 12:06:04.233488    9924 main.go:141] libmachine: Decoding PEM data...
	I0317 12:06:04.233488    9924 main.go:141] libmachine: Parsing certificate...
	I0317 12:06:04.233488    9924 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0317 12:06:04.233488    9924 main.go:141] libmachine: Decoding PEM data...
	I0317 12:06:04.233488    9924 main.go:141] libmachine: Parsing certificate...
	I0317 12:06:04.233488    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0317 12:06:06.270705    9924 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0317 12:06:06.270705    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:06.271443    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0317 12:06:08.016864    9924 main.go:141] libmachine: [stdout =====>] : False
	
	I0317 12:06:08.017702    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:08.017833    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 12:06:09.498474    9924 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 12:06:09.498474    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:09.498474    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 12:06:13.183388    9924 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 12:06:13.183388    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:13.186222    9924 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0317 12:06:13.690296    9924 main.go:141] libmachine: Creating SSH key...
	I0317 12:06:13.781412    9924 main.go:141] libmachine: Creating VM...
	I0317 12:06:13.782316    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 12:06:16.707585    9924 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 12:06:16.708511    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:16.708511    9924 main.go:141] libmachine: Using switch "Default Switch"
	I0317 12:06:16.708511    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 12:06:18.484489    9924 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 12:06:18.485156    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:18.485254    9924 main.go:141] libmachine: Creating VHD
	I0317 12:06:18.485319    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0317 12:06:22.237463    9924 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 42AF4847-B77D-4246-A8E8-BBAA9E65695D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0317 12:06:22.238080    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:22.238080    9924 main.go:141] libmachine: Writing magic tar header
	I0317 12:06:22.238157    9924 main.go:141] libmachine: Writing SSH key tar header
	I0317 12:06:22.250547    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0317 12:06:25.467055    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:06:25.467055    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:25.467970    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\disk.vhd' -SizeBytes 20000MB
	I0317 12:06:28.080245    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:06:28.080245    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:28.081029    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-781100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0317 12:06:31.742462    9924 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-781100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0317 12:06:31.742866    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:31.742941    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-781100 -DynamicMemoryEnabled $false
	I0317 12:06:34.048167    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:06:34.048167    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:34.048167    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-781100 -Count 2
	I0317 12:06:36.289983    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:06:36.289983    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:36.290397    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-781100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\boot2docker.iso'
	I0317 12:06:39.015440    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:06:39.016023    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:39.016115    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-781100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\disk.vhd'
	I0317 12:06:41.651701    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:06:41.651821    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:41.651821    9924 main.go:141] libmachine: Starting VM...
	I0317 12:06:41.651821    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-781100
	I0317 12:06:44.822322    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:06:44.822322    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:44.822322    9924 main.go:141] libmachine: Waiting for host to start...
	I0317 12:06:44.822322    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:06:47.096047    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:06:47.096094    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:47.096094    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:06:49.629651    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:06:49.629718    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:50.630884    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:06:52.860966    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:06:52.860966    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:52.861473    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:06:55.435604    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:06:55.436178    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:56.437207    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:06:58.622932    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:06:58.622932    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:06:58.622932    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:07:01.105993    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:07:01.106442    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:02.107552    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:07:04.362440    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:07:04.363346    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:04.363527    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:07:06.933098    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:07:06.933098    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:07.933894    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:07:10.221382    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:07:10.222203    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:10.222263    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:07:12.825288    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:07:12.826058    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:12.826113    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:07:14.950269    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:07:14.951270    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:14.951300    9924 machine.go:93] provisionDockerMachine start ...
	I0317 12:07:14.951368    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:07:17.139298    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:07:17.139298    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:17.140278    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:07:19.707571    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:07:19.708598    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:19.715188    9924 main.go:141] libmachine: Using SSH client type: native
	I0317 12:07:19.730080    9924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.124 22 <nil> <nil>}
	I0317 12:07:19.730080    9924 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 12:07:19.851260    9924 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0317 12:07:19.851358    9924 buildroot.go:166] provisioning hostname "multinode-781100"
	I0317 12:07:19.851537    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:07:22.044321    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:07:22.044321    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:22.044789    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:07:24.625258    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:07:24.626150    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:24.632446    9924 main.go:141] libmachine: Using SSH client type: native
	I0317 12:07:24.633048    9924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.124 22 <nil> <nil>}
	I0317 12:07:24.633270    9924 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-781100 && echo "multinode-781100" | sudo tee /etc/hostname
	I0317 12:07:24.779083    9924 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-781100
	
	I0317 12:07:24.779213    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:07:26.925397    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:07:26.925397    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:26.926226    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:07:29.502394    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:07:29.502818    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:29.508608    9924 main.go:141] libmachine: Using SSH client type: native
	I0317 12:07:29.508664    9924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.124 22 <nil> <nil>}
	I0317 12:07:29.508664    9924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-781100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-781100/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-781100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 12:07:29.654913    9924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 12:07:29.654913    9924 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0317 12:07:29.654913    9924 buildroot.go:174] setting up certificates
	I0317 12:07:29.654913    9924 provision.go:84] configureAuth start
	I0317 12:07:29.654913    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:07:31.792093    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:07:31.792093    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:31.792661    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:07:34.341723    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:07:34.341723    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:34.342396    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:07:36.492490    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:07:36.492760    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:36.492760    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:07:39.102471    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:07:39.102726    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:39.103059    9924 provision.go:143] copyHostCerts
	I0317 12:07:39.103249    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0317 12:07:39.103516    9924 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0317 12:07:39.103516    9924 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0317 12:07:39.104126    9924 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0317 12:07:39.105630    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0317 12:07:39.105926    9924 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0317 12:07:39.105994    9924 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0317 12:07:39.106097    9924 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0317 12:07:39.107589    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0317 12:07:39.107777    9924 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0317 12:07:39.107777    9924 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0317 12:07:39.107777    9924 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0317 12:07:39.109227    9924 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-781100 san=[127.0.0.1 172.25.16.124 localhost minikube multinode-781100]
	I0317 12:07:39.300195    9924 provision.go:177] copyRemoteCerts
	I0317 12:07:39.311203    9924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 12:07:39.312237    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:07:41.445297    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:07:41.445297    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:41.446368    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:07:44.002633    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:07:44.002887    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:44.003673    9924 sshutil.go:53] new ssh client: &{IP:172.25.16.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\id_rsa Username:docker}
	I0317 12:07:44.115243    9924 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8039916s)
	I0317 12:07:44.115243    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0317 12:07:44.116196    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0317 12:07:44.163536    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0317 12:07:44.163536    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 12:07:44.214595    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0317 12:07:44.214595    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 12:07:44.270972    9924 provision.go:87] duration metric: took 14.615847s to configureAuth
	I0317 12:07:44.271005    9924 buildroot.go:189] setting minikube options for container-runtime
	I0317 12:07:44.271932    9924 config.go:182] Loaded profile config "multinode-781100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 12:07:44.272056    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:07:46.469129    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:07:46.469403    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:46.469503    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:07:49.086567    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:07:49.087546    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:49.094466    9924 main.go:141] libmachine: Using SSH client type: native
	I0317 12:07:49.094702    9924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.124 22 <nil> <nil>}
	I0317 12:07:49.094702    9924 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0317 12:07:49.225285    9924 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0317 12:07:49.225285    9924 buildroot.go:70] root file system type: tmpfs
	I0317 12:07:49.225583    9924 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0317 12:07:49.225648    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:07:51.428667    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:07:51.428837    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:51.428837    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:07:53.964223    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:07:53.964223    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:53.970600    9924 main.go:141] libmachine: Using SSH client type: native
	I0317 12:07:53.971581    9924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.124 22 <nil> <nil>}
	I0317 12:07:53.971581    9924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0317 12:07:54.127392    9924 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0317 12:07:54.127534    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:07:56.319026    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:07:56.320048    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:56.320048    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:07:58.895901    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:07:58.895996    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:07:58.902563    9924 main.go:141] libmachine: Using SSH client type: native
	I0317 12:07:58.903449    9924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.124 22 <nil> <nil>}
	I0317 12:07:58.903449    9924 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0317 12:08:01.177901    9924 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0317 12:08:01.177987    9924 machine.go:96] duration metric: took 46.2262208s to provisionDockerMachine
	I0317 12:08:01.177987    9924 client.go:171] duration metric: took 1m56.9441516s to LocalClient.Create
	I0317 12:08:01.177987    9924 start.go:167] duration metric: took 1m56.9441516s to libmachine.API.Create "multinode-781100"
	I0317 12:08:01.177987    9924 start.go:293] postStartSetup for "multinode-781100" (driver="hyperv")
	I0317 12:08:01.177987    9924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 12:08:01.190586    9924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 12:08:01.190586    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:08:03.339236    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:08:03.339236    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:08:03.340316    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:08:05.892761    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:08:05.892914    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:08:05.892914    9924 sshutil.go:53] new ssh client: &{IP:172.25.16.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\id_rsa Username:docker}
	I0317 12:08:06.000176    9924 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8095415s)
	I0317 12:08:06.010110    9924 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 12:08:06.018007    9924 command_runner.go:130] > NAME=Buildroot
	I0317 12:08:06.018007    9924 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0317 12:08:06.018007    9924 command_runner.go:130] > ID=buildroot
	I0317 12:08:06.018007    9924 command_runner.go:130] > VERSION_ID=2023.02.9
	I0317 12:08:06.018007    9924 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0317 12:08:06.018007    9924 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 12:08:06.018007    9924 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0317 12:08:06.018789    9924 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0317 12:08:06.019508    9924 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> 89402.pem in /etc/ssl/certs
	I0317 12:08:06.019508    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> /etc/ssl/certs/89402.pem
	I0317 12:08:06.034646    9924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 12:08:06.058119    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /etc/ssl/certs/89402.pem (1708 bytes)
	I0317 12:08:06.104096    9924 start.go:296] duration metric: took 4.9260591s for postStartSetup
	I0317 12:08:06.108201    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:08:08.289226    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:08:08.290027    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:08:08.290027    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:08:10.844284    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:08:10.845284    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:08:10.845334    9924 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\config.json ...
	I0317 12:08:10.848691    9924 start.go:128] duration metric: took 2m6.6182621s to createHost
	I0317 12:08:10.848691    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:08:13.040167    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:08:13.040344    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:08:13.040499    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:08:15.672376    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:08:15.673412    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:08:15.678295    9924 main.go:141] libmachine: Using SSH client type: native
	I0317 12:08:15.679209    9924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.124 22 <nil> <nil>}
	I0317 12:08:15.679209    9924 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 12:08:15.810069    9924 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742213295.836185619
	
	I0317 12:08:15.810069    9924 fix.go:216] guest clock: 1742213295.836185619
	I0317 12:08:15.810069    9924 fix.go:229] Guest: 2025-03-17 12:08:15.836185619 +0000 UTC Remote: 2025-03-17 12:08:10.8486914 +0000 UTC m=+132.143254301 (delta=4.987494219s)
	I0317 12:08:15.810289    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:08:18.028965    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:08:18.029030    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:08:18.029208    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:08:20.597910    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:08:20.597910    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:08:20.604202    9924 main.go:141] libmachine: Using SSH client type: native
	I0317 12:08:20.604836    9924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.124 22 <nil> <nil>}
	I0317 12:08:20.604836    9924 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1742213295
	I0317 12:08:20.743691    9924 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 17 12:08:15 UTC 2025
	
	I0317 12:08:20.743786    9924 fix.go:236] clock set: Mon Mar 17 12:08:15 UTC 2025
	 (err=<nil>)
	I0317 12:08:20.743815    9924 start.go:83] releasing machines lock for "multinode-781100", held for 2m16.5132857s
	I0317 12:08:20.743815    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:08:22.929927    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:08:22.929927    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:08:22.930590    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:08:25.478995    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:08:25.480028    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:08:25.485656    9924 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0317 12:08:25.485812    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:08:25.495377    9924 ssh_runner.go:195] Run: cat /version.json
	I0317 12:08:25.495377    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:08:27.735757    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:08:27.736444    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:08:27.736444    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:08:27.751446    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:08:27.751446    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:08:27.751446    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:08:30.380546    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:08:30.380546    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:08:30.380546    9924 sshutil.go:53] new ssh client: &{IP:172.25.16.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\id_rsa Username:docker}
	I0317 12:08:30.416406    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:08:30.416406    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:08:30.416406    9924 sshutil.go:53] new ssh client: &{IP:172.25.16.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\id_rsa Username:docker}
	I0317 12:08:30.482353    9924 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0317 12:08:30.483558    9924 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9978514s)
	W0317 12:08:30.483654    9924 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
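The status 127 recorded here is bash's "command not found" exit code: minikube probed the registry with the Windows binary name `curl.exe` inside the Linux guest, which only ships `curl`, and this failed probe is what later produces the test's unexpected "Failing to connect to https://registry.k8s.io/" stderr. The exit code can be reproduced on any Linux host (no network is touched, since the command never resolves):

```shell
# bash returns 127 when a command cannot be found -- the exact status the
# log records for `curl.exe` inside the minikube guest.
status=0
bash -c 'curl.exe -sS -m 2 https://registry.k8s.io/' 2>/dev/null || status=$?
echo "exit status: $status"   # 127 = command not found
```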
	I0317 12:08:30.515707    9924 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0317 12:08:30.515707    9924 ssh_runner.go:235] Completed: cat /version.json: (5.0202792s)
	I0317 12:08:30.527527    9924 ssh_runner.go:195] Run: systemctl --version
	I0317 12:08:30.537893    9924 command_runner.go:130] > systemd 252 (252)
	I0317 12:08:30.537962    9924 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0317 12:08:30.548440    9924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 12:08:30.556725    9924 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0317 12:08:30.557884    9924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 12:08:30.568364    9924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 12:08:30.598152    9924 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0317 12:08:30.598152    9924 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 12:08:30.598323    9924 start.go:495] detecting cgroup driver to use...
	I0317 12:08:30.598590    9924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 12:08:30.633283    9924 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0317 12:08:30.644783    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0317 12:08:30.663993    9924 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0317 12:08:30.663993    9924 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0317 12:08:30.676805    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 12:08:30.694907    9924 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 12:08:30.705260    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 12:08:30.736588    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 12:08:30.765605    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 12:08:30.798979    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 12:08:30.831482    9924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 12:08:30.859396    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 12:08:30.892295    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 12:08:30.923655    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
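The run of `ssh_runner` sed commands above rewrites `/etc/containerd/config.toml` to use the cgroupfs driver and the `runc.v2` runtime. The same two key edits can be sketched against a scratch copy of the file (illustrative only; the real run edits `/etc/containerd/config.toml` in the VM via sudo, and assumes GNU sed for `-i -r`):

```shell
# Apply the log's cgroupfs edits to a scratch containerd config.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
  SystemdCgroup = true
  runtime_type = "io.containerd.runtime.v1.linux"
EOF
sed -i -r \
  -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
  -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
  "$cfg"
cat "$cfg"
```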
	I0317 12:08:30.954103    9924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 12:08:30.972906    9924 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 12:08:30.973599    9924 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 12:08:30.985727    9924 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 12:08:31.020808    9924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 12:08:31.048093    9924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:08:31.261235    9924 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 12:08:31.294244    9924 start.go:495] detecting cgroup driver to use...
	I0317 12:08:31.306669    9924 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0317 12:08:31.329778    9924 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0317 12:08:31.330752    9924 command_runner.go:130] > [Unit]
	I0317 12:08:31.330752    9924 command_runner.go:130] > Description=Docker Application Container Engine
	I0317 12:08:31.330752    9924 command_runner.go:130] > Documentation=https://docs.docker.com
	I0317 12:08:31.330752    9924 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0317 12:08:31.330752    9924 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0317 12:08:31.330752    9924 command_runner.go:130] > StartLimitBurst=3
	I0317 12:08:31.330752    9924 command_runner.go:130] > StartLimitIntervalSec=60
	I0317 12:08:31.330752    9924 command_runner.go:130] > [Service]
	I0317 12:08:31.330752    9924 command_runner.go:130] > Type=notify
	I0317 12:08:31.330752    9924 command_runner.go:130] > Restart=on-failure
	I0317 12:08:31.330911    9924 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0317 12:08:31.330911    9924 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0317 12:08:31.330956    9924 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0317 12:08:31.330956    9924 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0317 12:08:31.330956    9924 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0317 12:08:31.331009    9924 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0317 12:08:31.331009    9924 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0317 12:08:31.331049    9924 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0317 12:08:31.331049    9924 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0317 12:08:31.331093    9924 command_runner.go:130] > ExecStart=
	I0317 12:08:31.331132    9924 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0317 12:08:31.331132    9924 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0317 12:08:31.331178    9924 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0317 12:08:31.331254    9924 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0317 12:08:31.331254    9924 command_runner.go:130] > LimitNOFILE=infinity
	I0317 12:08:31.331254    9924 command_runner.go:130] > LimitNPROC=infinity
	I0317 12:08:31.331254    9924 command_runner.go:130] > LimitCORE=infinity
	I0317 12:08:31.331254    9924 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0317 12:08:31.331297    9924 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0317 12:08:31.331297    9924 command_runner.go:130] > TasksMax=infinity
	I0317 12:08:31.331297    9924 command_runner.go:130] > TimeoutStartSec=0
	I0317 12:08:31.331333    9924 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0317 12:08:31.331333    9924 command_runner.go:130] > Delegate=yes
	I0317 12:08:31.331333    9924 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0317 12:08:31.331333    9924 command_runner.go:130] > KillMode=process
	I0317 12:08:31.331333    9924 command_runner.go:130] > [Install]
	I0317 12:08:31.331333    9924 command_runner.go:130] > WantedBy=multi-user.target
	I0317 12:08:31.342262    9924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 12:08:31.374388    9924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 12:08:31.418287    9924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 12:08:31.451224    9924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 12:08:31.487420    9924 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 12:08:31.557810    9924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 12:08:31.580837    9924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 12:08:31.612413    9924 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0317 12:08:31.625437    9924 ssh_runner.go:195] Run: which cri-dockerd
	I0317 12:08:31.630769    9924 command_runner.go:130] > /usr/bin/cri-dockerd
	I0317 12:08:31.641937    9924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0317 12:08:31.657823    9924 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0317 12:08:31.699073    9924 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0317 12:08:31.892927    9924 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0317 12:08:32.091417    9924 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0317 12:08:32.091831    9924 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0317 12:08:32.139644    9924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:08:32.348342    9924 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 12:08:35.274485    9924 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.9259163s)
	I0317 12:08:35.287388    9924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0317 12:08:35.323396    9924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 12:08:35.357691    9924 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0317 12:08:35.564436    9924 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0317 12:08:35.753145    9924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:08:35.956282    9924 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0317 12:08:35.998312    9924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 12:08:36.032079    9924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:08:36.276003    9924 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0317 12:08:36.388020    9924 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0317 12:08:36.399660    9924 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0317 12:08:36.408319    9924 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0317 12:08:36.408319    9924 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0317 12:08:36.408319    9924 command_runner.go:130] > Device: 0,22	Inode: 885         Links: 1
	I0317 12:08:36.408483    9924 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0317 12:08:36.408483    9924 command_runner.go:130] > Access: 2025-03-17 12:08:36.332803610 +0000
	I0317 12:08:36.408483    9924 command_runner.go:130] > Modify: 2025-03-17 12:08:36.332803610 +0000
	I0317 12:08:36.408483    9924 command_runner.go:130] > Change: 2025-03-17 12:08:36.335803624 +0000
	I0317 12:08:36.408483    9924 command_runner.go:130] >  Birth: -
	I0317 12:08:36.408579    9924 start.go:563] Will wait 60s for crictl version
	I0317 12:08:36.420881    9924 ssh_runner.go:195] Run: which crictl
	I0317 12:08:36.428326    9924 command_runner.go:130] > /usr/bin/crictl
	I0317 12:08:36.440770    9924 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 12:08:36.493788    9924 command_runner.go:130] > Version:  0.1.0
	I0317 12:08:36.493788    9924 command_runner.go:130] > RuntimeName:  docker
	I0317 12:08:36.493788    9924 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0317 12:08:36.493788    9924 command_runner.go:130] > RuntimeApiVersion:  v1
	I0317 12:08:36.496424    9924 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0317 12:08:36.506881    9924 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 12:08:36.538517    9924 command_runner.go:130] > 27.4.0
	I0317 12:08:36.549069    9924 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 12:08:36.580554    9924 command_runner.go:130] > 27.4.0
	I0317 12:08:36.585063    9924 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0317 12:08:36.585133    9924 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0317 12:08:36.588935    9924 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0317 12:08:36.588935    9924 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0317 12:08:36.588935    9924 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0317 12:08:36.588935    9924 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4b:84:d5 Flags:up|broadcast|multicast|running}
	I0317 12:08:36.591773    9924 ip.go:214] interface addr: fe80::f0c7:c31c:6237:ef35/64
	I0317 12:08:36.591773    9924 ip.go:214] interface addr: 172.25.16.1/20
	I0317 12:08:36.601255    9924 ssh_runner.go:195] Run: grep 172.25.16.1	host.minikube.internal$ /etc/hosts
	I0317 12:08:36.607048    9924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
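The one-liner above is minikube's idempotent hosts update: strip any stale `host.minikube.internal` entry, append a fresh one pointing at the host-side switch IP, and copy the result back. A sketch of the same pattern against a scratch file (the IP is the one from the log; the pre-existing `10.0.0.5` entry is a made-up stale value for illustration):

```shell
# Idempotent hosts-entry update, mirroring the log's grep/echo/cp one-liner.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n' > "$hosts"
ip=172.25.16.1
# Drop any stale entry, then append the current mapping.
{ grep -v 'host.minikube.internal$' "$hosts"; \
  printf '%s\thost.minikube.internal\n' "$ip"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```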
	I0317 12:08:36.635435    9924 kubeadm.go:883] updating cluster {Name:multinode-781100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-7
81100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.124 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 12:08:36.636065    9924 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 12:08:36.645024    9924 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 12:08:36.669616    9924 docker.go:689] Got preloaded images: 
	I0317 12:08:36.669706    9924 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.2 wasn't preloaded
	I0317 12:08:36.680427    9924 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0317 12:08:36.697996    9924 command_runner.go:139] > {"Repositories":{}}
	I0317 12:08:36.709741    9924 ssh_runner.go:195] Run: which lz4
	I0317 12:08:36.716204    9924 command_runner.go:130] > /usr/bin/lz4
	I0317 12:08:36.717000    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0317 12:08:36.729038    9924 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 12:08:36.734565    9924 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 12:08:36.735244    9924 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 12:08:36.735306    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349803115 bytes)
	I0317 12:08:38.666698    9924 docker.go:653] duration metric: took 1.9493427s to copy over tarball
	I0317 12:08:38.679327    9924 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 12:08:47.498278    9924 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.818861s)
	I0317 12:08:47.498278    9924 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0317 12:08:47.562366    9924 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0317 12:08:47.581885    9924 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.3":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.16-0":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5":"sha256:a9e7e6b294baf1695fccb862d95
6c5d3ad8510e1e4ca1535f35dc09f247abbfc"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.32.2":"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f":"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.32.2":"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90":"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.32.2":"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d":"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68f
f49a87c2266ebc5"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.32.2":"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76":"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.10":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136"}}}
	I0317 12:08:47.582471    9924 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0317 12:08:47.624562    9924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:08:47.822616    9924 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 12:08:52.821688    9924 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.9990212s)
	I0317 12:08:52.830582    9924 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 12:08:52.859277    9924 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.2
	I0317 12:08:52.859415    9924 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.2
	I0317 12:08:52.859415    9924 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.2
	I0317 12:08:52.859415    9924 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.2
	I0317 12:08:52.859415    9924 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0317 12:08:52.859415    9924 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0317 12:08:52.859415    9924 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0317 12:08:52.859415    9924 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 12:08:52.859415    9924 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0317 12:08:52.859415    9924 cache_images.go:84] Images are preloaded, skipping loading
	I0317 12:08:52.859415    9924 kubeadm.go:934] updating node { 172.25.16.124 8443 v1.32.2 docker true true} ...
	I0317 12:08:52.859415    9924 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-781100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.16.124
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:multinode-781100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 12:08:52.870377    9924 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0317 12:08:52.934807    9924 command_runner.go:130] > cgroupfs
	I0317 12:08:52.934807    9924 cni.go:84] Creating CNI manager for ""
	I0317 12:08:52.934807    9924 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0317 12:08:52.934807    9924 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 12:08:52.934807    9924 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.16.124 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-781100 NodeName:multinode-781100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.16.124"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.16.124 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 12:08:52.935610    9924 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.16.124
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-781100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.16.124"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.16.124"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
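The manifest above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by `---`). A minimal sketch of sanity-checking such a stream with only POSIX tools — the file path and abbreviated content here are illustrative, not the file minikube actually writes:

```shell
# Write an abbreviated multi-doc kubeadm manifest and list the 'kind' of each document.
cat > /tmp/kubeadm-demo.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# Each document must declare exactly one kind; grep them out for a quick check.
grep '^kind:' /tmp/kubeadm-demo.yaml | awk '{print $2}'
```

On a machine with kubeadm installed, `kubeadm config validate --config <file>` performs the full schema check.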
	
	I0317 12:08:52.948479    9924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 12:08:52.965245    9924 command_runner.go:130] > kubeadm
	I0317 12:08:52.965245    9924 command_runner.go:130] > kubectl
	I0317 12:08:52.965245    9924 command_runner.go:130] > kubelet
	I0317 12:08:52.965245    9924 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 12:08:52.975243    9924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 12:08:52.990925    9924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0317 12:08:53.018433    9924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 12:08:53.053922    9924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I0317 12:08:53.097878    9924 ssh_runner.go:195] Run: grep 172.25.16.124	control-plane.minikube.internal$ /etc/hosts
	I0317 12:08:53.103660    9924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.16.124	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
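The `/etc/hosts` command above is an idempotent replace: strip any stale line for `control-plane.minikube.internal`, then append the current mapping, so repeated runs never accumulate duplicates. The same pattern can be sketched against a scratch file (bash, since `$'\t'` quoting is used; the paths and the stale IP are illustrative):

```shell
# Idempotent host-entry replacement on a scratch file instead of /etc/hosts.
HOSTS=/tmp/demo-hosts
printf '127.0.0.1\tlocalhost\n172.25.16.99\tcontrol-plane.minikube.internal\n' > "$HOSTS"
# Drop any existing line for the name, then append the current mapping.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$HOSTS"; \
  printf '172.25.16.124\tcontrol-plane.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```

Writing to a temp file and replacing the original in one step avoids truncating the hosts file mid-rewrite.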
	I0317 12:08:53.144388    9924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:08:53.336829    9924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 12:08:53.364398    9924 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100 for IP: 172.25.16.124
	I0317 12:08:53.364455    9924 certs.go:194] generating shared ca certs ...
	I0317 12:08:53.364531    9924 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:08:53.364880    9924 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0317 12:08:53.365842    9924 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0317 12:08:53.366047    9924 certs.go:256] generating profile certs ...
	I0317 12:08:53.366742    9924 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\client.key
	I0317 12:08:53.366882    9924 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\client.crt with IP's: []
	I0317 12:08:53.507877    9924 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\client.crt ...
	I0317 12:08:53.507877    9924 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\client.crt: {Name:mk1bad38d20220ed164a71a22fbf5bd0736d47ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:08:53.509836    9924 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\client.key ...
	I0317 12:08:53.509836    9924 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\client.key: {Name:mkb51d0e1ec7b1ecdbe1a3a68161c76d5951e3c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:08:53.511197    9924 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\apiserver.key.1f330098
	I0317 12:08:53.511419    9924 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\apiserver.crt.1f330098 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.16.124]
	I0317 12:08:53.624159    9924 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\apiserver.crt.1f330098 ...
	I0317 12:08:53.624159    9924 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\apiserver.crt.1f330098: {Name:mk1baf5a9240613b1a4b9cca56494b04cbdedb09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:08:53.625666    9924 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\apiserver.key.1f330098 ...
	I0317 12:08:53.625666    9924 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\apiserver.key.1f330098: {Name:mk4afdfda98081748050a1566233238ba3bf1ffa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:08:53.626049    9924 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\apiserver.crt.1f330098 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\apiserver.crt
	I0317 12:08:53.642674    9924 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\apiserver.key.1f330098 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\apiserver.key
	I0317 12:08:53.644074    9924 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\proxy-client.key
	I0317 12:08:53.644286    9924 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\proxy-client.crt with IP's: []
	I0317 12:08:54.395098    9924 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\proxy-client.crt ...
	I0317 12:08:54.395098    9924 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\proxy-client.crt: {Name:mkaa86b337f826ba194d7d2c62896bfe4e8d8042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:08:54.396771    9924 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\proxy-client.key ...
	I0317 12:08:54.396771    9924 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\proxy-client.key: {Name:mkeece2ed1a9d7165892b31646c51a23dd01ed4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:08:54.397724    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0317 12:08:54.397724    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0317 12:08:54.397724    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0317 12:08:54.399059    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0317 12:08:54.399314    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0317 12:08:54.399460    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0317 12:08:54.399637    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0317 12:08:54.411813    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0317 12:08:54.412137    9924 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem (1338 bytes)
	W0317 12:08:54.413064    9924 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940_empty.pem, impossibly tiny 0 bytes
	I0317 12:08:54.413064    9924 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0317 12:08:54.413327    9924 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0317 12:08:54.413719    9924 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0317 12:08:54.413931    9924 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0317 12:08:54.414246    9924 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem (1708 bytes)
	I0317 12:08:54.414246    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> /usr/share/ca-certificates/89402.pem
	I0317 12:08:54.415121    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:08:54.415121    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem -> /usr/share/ca-certificates/8940.pem
	I0317 12:08:54.415844    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 12:08:54.462816    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 12:08:54.501127    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 12:08:54.548703    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0317 12:08:54.596058    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0317 12:08:54.646377    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 12:08:54.692803    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 12:08:54.741209    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 12:08:54.788831    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /usr/share/ca-certificates/89402.pem (1708 bytes)
	I0317 12:08:54.844556    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 12:08:54.894624    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem --> /usr/share/ca-certificates/8940.pem (1338 bytes)
	I0317 12:08:54.938701    9924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 12:08:54.981248    9924 ssh_runner.go:195] Run: openssl version
	I0317 12:08:54.990532    9924 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0317 12:08:55.003638    9924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/89402.pem && ln -fs /usr/share/ca-certificates/89402.pem /etc/ssl/certs/89402.pem"
	I0317 12:08:55.033990    9924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89402.pem
	I0317 12:08:55.042037    9924 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 17 10:46 /usr/share/ca-certificates/89402.pem
	I0317 12:08:55.042129    9924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:46 /usr/share/ca-certificates/89402.pem
	I0317 12:08:55.055862    9924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89402.pem
	I0317 12:08:55.066028    9924 command_runner.go:130] > 3ec20f2e
	I0317 12:08:55.078549    9924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/89402.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 12:08:55.108147    9924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 12:08:55.140433    9924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:08:55.147467    9924 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 17 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:08:55.147557    9924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:08:55.159097    9924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:08:55.168586    9924 command_runner.go:130] > b5213941
	I0317 12:08:55.180469    9924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 12:08:55.210014    9924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8940.pem && ln -fs /usr/share/ca-certificates/8940.pem /etc/ssl/certs/8940.pem"
	I0317 12:08:55.237786    9924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8940.pem
	I0317 12:08:55.244536    9924 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 17 10:46 /usr/share/ca-certificates/8940.pem
	I0317 12:08:55.244655    9924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:46 /usr/share/ca-certificates/8940.pem
	I0317 12:08:55.255380    9924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8940.pem
	I0317 12:08:55.264906    9924 command_runner.go:130] > 51391683
	I0317 12:08:55.274464    9924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8940.pem /etc/ssl/certs/51391683.0"
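The `openssl x509 -hash` / `ln -fs` sequence above installs each CA into the OpenSSL trust directory, where certificates are looked up via subject-hash symlinks named `<hash>.0`. A sketch of that convention using a scratch directory — the hash value is the one `openssl x509 -hash -noout` printed for minikubeCA in this run, and the empty file stands in for the real PEM:

```shell
# OpenSSL finds CAs in its certs dir through <subject-hash>.0 symlinks.
CERTS=/tmp/demo-certs
mkdir -p "$CERTS"
: > "$CERTS/minikubeCA.pem"   # stand-in for the real CA PEM
hash=b5213941                 # subject hash reported in the log above
# Same guard minikube runs: only create the link if it is not already there.
test -L "$CERTS/$hash.0" || ln -fs "$CERTS/minikubeCA.pem" "$CERTS/$hash.0"
ls -l "$CERTS"
```

Tools that honor `SSL_CERT_DIR` resolve a CA by hashing the issuer subject and opening `<hash>.0`, which is why the symlink name matters more than the PEM filename.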
	I0317 12:08:55.304472    9924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 12:08:55.310484    9924 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 12:08:55.311077    9924 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 12:08:55.311323    9924 kubeadm.go:392] StartCluster: {Name:multinode-781100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-781100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.124 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0317 12:08:55.320575    9924 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0317 12:08:55.355795    9924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 12:08:55.372762    9924 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0317 12:08:55.373354    9924 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0317 12:08:55.373354    9924 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0317 12:08:55.385520    9924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 12:08:55.412930    9924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 12:08:55.429956    9924 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0317 12:08:55.430016    9924 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0317 12:08:55.430016    9924 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0317 12:08:55.430016    9924 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 12:08:55.430599    9924 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 12:08:55.430599    9924 kubeadm.go:157] found existing configuration files:
	
	I0317 12:08:55.442516    9924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 12:08:55.457845    9924 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 12:08:55.457845    9924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 12:08:55.468880    9924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 12:08:55.499613    9924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 12:08:55.516934    9924 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 12:08:55.516996    9924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 12:08:55.528195    9924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 12:08:55.557816    9924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 12:08:55.574200    9924 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 12:08:55.574200    9924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 12:08:55.583329    9924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 12:08:55.612538    9924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 12:08:55.628500    9924 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 12:08:55.628625    9924 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 12:08:55.640652    9924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
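The four grep/rm pairs above implement stale-config cleanup: each kubeconfig under `/etc/kubernetes` is kept only if it already points at the expected control-plane endpoint, and is otherwise deleted so `kubeadm init` regenerates it. The pattern can be sketched against a scratch directory (paths and the stale endpoint are illustrative):

```shell
# Keep each kubeconfig only if it references the expected endpoint; else remove it.
KUBEDIR=/tmp/demo-kubernetes
mkdir -p "$KUBEDIR"
echo 'server: https://old-endpoint:8443' > "$KUBEDIR/admin.conf"   # stale config
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  grep -q 'https://control-plane.minikube.internal:8443' "$KUBEDIR/$f" 2>/dev/null \
    || rm -f "$KUBEDIR/$f"
done
ls "$KUBEDIR"
```

On a fresh node, as in this run, none of the files exist, so every grep exits non-zero and the `rm -f` calls are no-ops before init proceeds.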
	I0317 12:08:55.656833    9924 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 12:08:56.072929    9924 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 12:08:56.072929    9924 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 12:09:10.005231    9924 command_runner.go:130] > [init] Using Kubernetes version: v1.32.2
	I0317 12:09:10.005231    9924 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 12:09:10.005383    9924 command_runner.go:130] > [preflight] Running pre-flight checks
	I0317 12:09:10.005453    9924 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 12:09:10.005632    9924 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 12:09:10.005632    9924 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 12:09:10.005632    9924 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 12:09:10.005632    9924 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 12:09:10.005632    9924 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 12:09:10.005632    9924 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 12:09:10.006293    9924 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 12:09:10.006293    9924 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 12:09:10.009009    9924 out.go:235]   - Generating certificates and keys ...
	I0317 12:09:10.009297    9924 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 12:09:10.009354    9924 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0317 12:09:10.009556    9924 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0317 12:09:10.009556    9924 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 12:09:10.009730    9924 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 12:09:10.009730    9924 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 12:09:10.009730    9924 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0317 12:09:10.009730    9924 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 12:09:10.009730    9924 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0317 12:09:10.009730    9924 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 12:09:10.010700    9924 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0317 12:09:10.010820    9924 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 12:09:10.010968    9924 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0317 12:09:10.011044    9924 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 12:09:10.011170    9924 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-781100] and IPs [172.25.16.124 127.0.0.1 ::1]
	I0317 12:09:10.011170    9924 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-781100] and IPs [172.25.16.124 127.0.0.1 ::1]
	I0317 12:09:10.011170    9924 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 12:09:10.011170    9924 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0317 12:09:10.011749    9924 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-781100] and IPs [172.25.16.124 127.0.0.1 ::1]
	I0317 12:09:10.011792    9924 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-781100] and IPs [172.25.16.124 127.0.0.1 ::1]
	I0317 12:09:10.012004    9924 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 12:09:10.012056    9924 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 12:09:10.012056    9924 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 12:09:10.012056    9924 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 12:09:10.012056    9924 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0317 12:09:10.012056    9924 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 12:09:10.012056    9924 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 12:09:10.012056    9924 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 12:09:10.012622    9924 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 12:09:10.012690    9924 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 12:09:10.012690    9924 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 12:09:10.012690    9924 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 12:09:10.012690    9924 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 12:09:10.012690    9924 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 12:09:10.012690    9924 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 12:09:10.012690    9924 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 12:09:10.012690    9924 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 12:09:10.013259    9924 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 12:09:10.013259    9924 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 12:09:10.013259    9924 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 12:09:10.013259    9924 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 12:09:10.013259    9924 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 12:09:10.016747    9924 out.go:235]   - Booting up control plane ...
	I0317 12:09:10.017284    9924 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 12:09:10.016747    9924 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 12:09:10.017571    9924 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 12:09:10.017571    9924 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 12:09:10.017571    9924 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 12:09:10.017571    9924 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 12:09:10.018220    9924 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 12:09:10.018220    9924 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 12:09:10.018220    9924 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 12:09:10.018220    9924 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 12:09:10.018220    9924 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 12:09:10.018220    9924 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0317 12:09:10.018763    9924 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 12:09:10.018763    9924 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 12:09:10.019113    9924 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 12:09:10.019113    9924 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 12:09:10.019113    9924 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002610095s
	I0317 12:09:10.019113    9924 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002610095s
	I0317 12:09:10.019638    9924 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 12:09:10.019638    9924 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 12:09:10.019769    9924 kubeadm.go:310] [api-check] The API server is healthy after 6.502564494s
	I0317 12:09:10.019769    9924 command_runner.go:130] > [api-check] The API server is healthy after 6.502564494s
	I0317 12:09:10.019812    9924 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 12:09:10.019812    9924 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 12:09:10.019812    9924 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 12:09:10.019812    9924 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 12:09:10.019812    9924 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 12:09:10.019812    9924 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0317 12:09:10.019812    9924 kubeadm.go:310] [mark-control-plane] Marking the node multinode-781100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 12:09:10.019812    9924 command_runner.go:130] > [mark-control-plane] Marking the node multinode-781100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 12:09:10.020690    9924 command_runner.go:130] > [bootstrap-token] Using token: x68cum.sfak71vlcseqbb27
	I0317 12:09:10.020690    9924 kubeadm.go:310] [bootstrap-token] Using token: x68cum.sfak71vlcseqbb27
	I0317 12:09:10.023441    9924 out.go:235]   - Configuring RBAC rules ...
	I0317 12:09:10.024148    9924 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 12:09:10.024148    9924 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 12:09:10.024518    9924 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 12:09:10.024518    9924 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 12:09:10.024518    9924 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 12:09:10.024518    9924 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 12:09:10.025061    9924 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 12:09:10.025061    9924 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 12:09:10.025458    9924 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 12:09:10.025458    9924 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 12:09:10.025672    9924 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 12:09:10.025672    9924 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 12:09:10.025915    9924 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 12:09:10.025915    9924 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 12:09:10.026112    9924 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 12:09:10.026112    9924 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0317 12:09:10.026270    9924 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 12:09:10.026270    9924 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0317 12:09:10.026270    9924 kubeadm.go:310] 
	I0317 12:09:10.026507    9924 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 12:09:10.026639    9924 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0317 12:09:10.026639    9924 kubeadm.go:310] 
	I0317 12:09:10.026865    9924 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0317 12:09:10.026865    9924 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 12:09:10.026865    9924 kubeadm.go:310] 
	I0317 12:09:10.027001    9924 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0317 12:09:10.027001    9924 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 12:09:10.027051    9924 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 12:09:10.027228    9924 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 12:09:10.027558    9924 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 12:09:10.027558    9924 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 12:09:10.027558    9924 kubeadm.go:310] 
	I0317 12:09:10.027772    9924 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0317 12:09:10.027772    9924 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 12:09:10.027772    9924 kubeadm.go:310] 
	I0317 12:09:10.028007    9924 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 12:09:10.028073    9924 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 12:09:10.028122    9924 kubeadm.go:310] 
	I0317 12:09:10.028319    9924 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 12:09:10.028319    9924 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0317 12:09:10.028501    9924 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 12:09:10.028568    9924 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 12:09:10.028715    9924 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 12:09:10.028766    9924 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 12:09:10.028879    9924 kubeadm.go:310] 
	I0317 12:09:10.029136    9924 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 12:09:10.029136    9924 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0317 12:09:10.029327    9924 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0317 12:09:10.029327    9924 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 12:09:10.029327    9924 kubeadm.go:310] 
	I0317 12:09:10.029676    9924 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x68cum.sfak71vlcseqbb27 \
	I0317 12:09:10.029676    9924 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token x68cum.sfak71vlcseqbb27 \
	I0317 12:09:10.029676    9924 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 \
	I0317 12:09:10.029676    9924 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 \
	I0317 12:09:10.029676    9924 kubeadm.go:310] 	--control-plane 
	I0317 12:09:10.029676    9924 command_runner.go:130] > 	--control-plane 
	I0317 12:09:10.029676    9924 kubeadm.go:310] 
	I0317 12:09:10.029676    9924 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 12:09:10.029676    9924 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0317 12:09:10.029676    9924 kubeadm.go:310] 
	I0317 12:09:10.029676    9924 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x68cum.sfak71vlcseqbb27 \
	I0317 12:09:10.029676    9924 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token x68cum.sfak71vlcseqbb27 \
	I0317 12:09:10.030649    9924 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 
	I0317 12:09:10.030649    9924 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 
	I0317 12:09:10.030649    9924 cni.go:84] Creating CNI manager for ""
	I0317 12:09:10.030811    9924 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0317 12:09:10.036055    9924 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0317 12:09:10.049464    9924 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0317 12:09:10.057500    9924 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0317 12:09:10.057500    9924 command_runner.go:130] >   Size: 3103192   	Blocks: 6064       IO Block: 4096   regular file
	I0317 12:09:10.057500    9924 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0317 12:09:10.057500    9924 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0317 12:09:10.057500    9924 command_runner.go:130] > Access: 2025-03-17 12:07:09.766788300 +0000
	I0317 12:09:10.057500    9924 command_runner.go:130] > Modify: 2025-01-14 09:03:58.000000000 +0000
	I0317 12:09:10.057500    9924 command_runner.go:130] > Change: 2025-03-17 12:07:01.085000000 +0000
	I0317 12:09:10.057500    9924 command_runner.go:130] >  Birth: -
	I0317 12:09:10.057500    9924 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 12:09:10.057500    9924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0317 12:09:10.100741    9924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0317 12:09:10.738140    9924 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0317 12:09:10.771003    9924 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0317 12:09:10.813031    9924 command_runner.go:130] > serviceaccount/kindnet created
	I0317 12:09:10.884076    9924 command_runner.go:130] > daemonset.apps/kindnet created
	I0317 12:09:10.887323    9924 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 12:09:10.901104    9924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-781100 minikube.k8s.io/updated_at=2025_03_17T12_09_10_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=multinode-781100 minikube.k8s.io/primary=true
	I0317 12:09:10.903099    9924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:09:10.911759    9924 command_runner.go:130] > -16
	I0317 12:09:10.912311    9924 ops.go:34] apiserver oom_adj: -16
	I0317 12:09:11.112777    9924 command_runner.go:130] > node/multinode-781100 labeled
	I0317 12:09:11.136902    9924 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0317 12:09:11.149569    9924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:09:11.258664    9924 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0317 12:09:11.651653    9924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:09:11.761176    9924 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0317 12:09:12.151295    9924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:09:12.259981    9924 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0317 12:09:12.651050    9924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:09:12.751970    9924 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0317 12:09:13.151264    9924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:09:13.257795    9924 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0317 12:09:13.648243    9924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:09:13.801152    9924 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0317 12:09:14.151063    9924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:09:14.262120    9924 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0317 12:09:14.650028    9924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:09:14.829152    9924 command_runner.go:130] > NAME      SECRETS   AGE
	I0317 12:09:14.829241    9924 command_runner.go:130] > default   0         0s
	I0317 12:09:14.829241    9924 kubeadm.go:1113] duration metric: took 3.9418778s to wait for elevateKubeSystemPrivileges
	I0317 12:09:14.829409    9924 kubeadm.go:394] duration metric: took 19.5178872s to StartCluster
	I0317 12:09:14.829409    9924 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:09:14.829605    9924 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 12:09:14.830691    9924 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:09:14.832720    9924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 12:09:14.832720    9924 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 12:09:14.832720    9924 addons.go:69] Setting storage-provisioner=true in profile "multinode-781100"
	I0317 12:09:14.832720    9924 start.go:235] Will wait 6m0s for node &{Name: IP:172.25.16.124 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 12:09:14.832720    9924 addons.go:238] Setting addon storage-provisioner=true in "multinode-781100"
	I0317 12:09:14.832720    9924 addons.go:69] Setting default-storageclass=true in profile "multinode-781100"
	I0317 12:09:14.832720    9924 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-781100"
	I0317 12:09:14.832720    9924 config.go:182] Loaded profile config "multinode-781100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 12:09:14.832720    9924 host.go:66] Checking if "multinode-781100" exists ...
	I0317 12:09:14.833721    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:09:14.834702    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:09:14.835702    9924 out.go:177] * Verifying Kubernetes components...
	I0317 12:09:14.855694    9924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:09:15.134867    9924 command_runner.go:130] > apiVersion: v1
	I0317 12:09:15.135503    9924 command_runner.go:130] > data:
	I0317 12:09:15.135503    9924 command_runner.go:130] >   Corefile: |
	I0317 12:09:15.135503    9924 command_runner.go:130] >     .:53 {
	I0317 12:09:15.135503    9924 command_runner.go:130] >         errors
	I0317 12:09:15.135503    9924 command_runner.go:130] >         health {
	I0317 12:09:15.135503    9924 command_runner.go:130] >            lameduck 5s
	I0317 12:09:15.135503    9924 command_runner.go:130] >         }
	I0317 12:09:15.135503    9924 command_runner.go:130] >         ready
	I0317 12:09:15.135503    9924 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0317 12:09:15.135503    9924 command_runner.go:130] >            pods insecure
	I0317 12:09:15.135651    9924 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0317 12:09:15.135651    9924 command_runner.go:130] >            ttl 30
	I0317 12:09:15.135651    9924 command_runner.go:130] >         }
	I0317 12:09:15.135651    9924 command_runner.go:130] >         prometheus :9153
	I0317 12:09:15.135651    9924 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0317 12:09:15.135725    9924 command_runner.go:130] >            max_concurrent 1000
	I0317 12:09:15.135725    9924 command_runner.go:130] >         }
	I0317 12:09:15.135747    9924 command_runner.go:130] >         cache 30 {
	I0317 12:09:15.135747    9924 command_runner.go:130] >            disable success cluster.local
	I0317 12:09:15.135747    9924 command_runner.go:130] >            disable denial cluster.local
	I0317 12:09:15.135747    9924 command_runner.go:130] >         }
	I0317 12:09:15.135808    9924 command_runner.go:130] >         loop
	I0317 12:09:15.135834    9924 command_runner.go:130] >         reload
	I0317 12:09:15.135834    9924 command_runner.go:130] >         loadbalance
	I0317 12:09:15.135834    9924 command_runner.go:130] >     }
	I0317 12:09:15.135834    9924 command_runner.go:130] > kind: ConfigMap
	I0317 12:09:15.135872    9924 command_runner.go:130] > metadata:
	I0317 12:09:15.135892    9924 command_runner.go:130] >   creationTimestamp: "2025-03-17T12:09:09Z"
	I0317 12:09:15.135892    9924 command_runner.go:130] >   name: coredns
	I0317 12:09:15.135892    9924 command_runner.go:130] >   namespace: kube-system
	I0317 12:09:15.135892    9924 command_runner.go:130] >   resourceVersion: "259"
	I0317 12:09:15.135892    9924 command_runner.go:130] >   uid: 25e6d1f6-eda4-469b-8b68-6eb0edc8365b
	I0317 12:09:15.136883    9924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.16.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 12:09:15.243042    9924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 12:09:15.607780    9924 command_runner.go:130] > configmap/coredns replaced
	I0317 12:09:15.607866    9924 start.go:971] {"host.minikube.internal": 172.25.16.1} host record injected into CoreDNS's ConfigMap
	I0317 12:09:15.609321    9924 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 12:09:15.609864    9924 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 12:09:15.609864    9924 kapi.go:59] client config for multinode-781100: &rest.Config{Host:"https://172.25.16.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-781100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-781100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2e292e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0317 12:09:15.610630    9924 kapi.go:59] client config for multinode-781100: &rest.Config{Host:"https://172.25.16.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-781100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-781100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2e292e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0317 12:09:15.611801    9924 cert_rotation.go:140] Starting client certificate rotation controller
	I0317 12:09:15.611877    9924 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0317 12:09:15.611877    9924 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0317 12:09:15.611952    9924 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0317 12:09:15.611952    9924 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0317 12:09:15.612527    9924 node_ready.go:35] waiting up to 6m0s for node "multinode-781100" to be "Ready" ...
	I0317 12:09:15.612907    9924 type.go:168] "Request Body" body=""
	I0317 12:09:15.612907    9924 deployment.go:95] "Request Body" body=""
	I0317 12:09:15.613066    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:15.613091    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:15.613091    9924 round_trippers.go:470] GET https://172.25.16.124:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0317 12:09:15.613091    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:15.613091    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:15.613091    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:15.613091    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:15.613091    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:15.634185    9924 round_trippers.go:581] Response Status: 200 OK in 20 milliseconds
	I0317 12:09:15.634185    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:15.634185    9924 round_trippers.go:587]     Audit-Id: 7e979ccf-abe0-4ad5-baac-d14211470541
	I0317 12:09:15.634270    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:15.634270    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:15.634270    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:15.634270    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:15.634361    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:15 GMT
	I0317 12:09:15.634361    9924 round_trippers.go:581] Response Status: 200 OK in 21 milliseconds
	I0317 12:09:15.634522    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:15.634770    9924 round_trippers.go:587]     Audit-Id: f66231be-cdc1-44bd-859f-36bc5b9345d6
	I0317 12:09:15.634770    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:15.634770    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:15.634770    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:15.634770    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:15.634770    9924 round_trippers.go:587]     Content-Length: 144
	I0317 12:09:15.634770    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:15 GMT
	I0317 12:09:15.634770    9924 deployment.go:95] "Response Body" body=<
		00000000  6b 38 73 00 0a 17 0a 0e  61 75 74 6f 73 63 61 6c  |k8s.....autoscal|
		00000010  69 6e 67 2f 76 31 12 05  53 63 61 6c 65 12 6d 0a  |ing/v1..Scale.m.|
		00000020  51 0a 07 63 6f 72 65 64  6e 73 12 00 1a 0b 6b 75  |Q..coredns....ku|
		00000030  62 65 2d 73 79 73 74 65  6d 22 00 2a 24 33 33 62  |be-system".*$33b|
		00000040  63 34 32 37 37 2d 63 39  65 34 2d 34 31 34 35 2d  |c4277-c9e4-4145-|
		00000050  62 61 61 61 2d 66 61 33  39 34 34 38 65 37 38 39  |baaa-fa39448e789|
		00000060  33 32 03 33 38 30 38 00  42 08 08 e5 a1 e0 be 06  |32.3808.B.......|
		00000070  10 00 12 02 08 02 1a 14  08 02 12 10 6b 38 73 2d  |............k8s-|
		00000080  61 70 70 3d 6b 75 62 65  2d 64 6e 73 1a 00 22 00  |app=kube-dns..".|
	 >
	I0317 12:09:15.635330    9924 deployment.go:111] "Request Body" body=<
		00000000  6b 38 73 00 0a 17 0a 0e  61 75 74 6f 73 63 61 6c  |k8s.....autoscal|
		00000010  69 6e 67 2f 76 31 12 05  53 63 61 6c 65 12 6d 0a  |ing/v1..Scale.m.|
		00000020  51 0a 07 63 6f 72 65 64  6e 73 12 00 1a 0b 6b 75  |Q..coredns....ku|
		00000030  62 65 2d 73 79 73 74 65  6d 22 00 2a 24 33 33 62  |be-system".*$33b|
		00000040  63 34 32 37 37 2d 63 39  65 34 2d 34 31 34 35 2d  |c4277-c9e4-4145-|
		00000050  62 61 61 61 2d 66 61 33  39 34 34 38 65 37 38 39  |baaa-fa39448e789|
		00000060  33 32 03 33 38 30 38 00  42 08 08 e5 a1 e0 be 06  |32.3808.B.......|
		00000070  10 00 12 02 08 01 1a 14  08 02 12 10 6b 38 73 2d  |............k8s-|
		00000080  61 70 70 3d 6b 75 62 65  2d 64 6e 73 1a 00 22 00  |app=kube-dns..".|
	 >
	I0317 12:09:15.635330    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:15.635485    9924 round_trippers.go:470] PUT https://172.25.16.124:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0317 12:09:15.635485    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:15.635591    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:15.635591    9924 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:15.635647    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:15.655431    9924 round_trippers.go:581] Response Status: 200 OK in 19 milliseconds
	I0317 12:09:15.655521    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:15.655521    9924 round_trippers.go:587]     Content-Length: 144
	I0317 12:09:15.655521    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:15 GMT
	I0317 12:09:15.655521    9924 round_trippers.go:587]     Audit-Id: 808123fd-bb00-4539-9960-f4c8190834dc
	I0317 12:09:15.655607    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:15.655607    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:15.655607    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:15.655607    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:15.655687    9924 deployment.go:111] "Response Body" body=<
		00000000  6b 38 73 00 0a 17 0a 0e  61 75 74 6f 73 63 61 6c  |k8s.....autoscal|
		00000010  69 6e 67 2f 76 31 12 05  53 63 61 6c 65 12 6d 0a  |ing/v1..Scale.m.|
		00000020  51 0a 07 63 6f 72 65 64  6e 73 12 00 1a 0b 6b 75  |Q..coredns....ku|
		00000030  62 65 2d 73 79 73 74 65  6d 22 00 2a 24 33 33 62  |be-system".*$33b|
		00000040  63 34 32 37 37 2d 63 39  65 34 2d 34 31 34 35 2d  |c4277-c9e4-4145-|
		00000050  62 61 61 61 2d 66 61 33  39 34 34 38 65 37 38 39  |baaa-fa39448e789|
		00000060  33 32 03 33 38 32 38 00  42 08 08 e5 a1 e0 be 06  |32.3828.B.......|
		00000070  10 00 12 02 08 01 1a 14  08 02 12 10 6b 38 73 2d  |............k8s-|
		00000080  61 70 70 3d 6b 75 62 65  2d 64 6e 73 1a 00 22 00  |app=kube-dns..".|
	 >
	I0317 12:09:16.112688    9924 type.go:168] "Request Body" body=""
	I0317 12:09:16.112688    9924 deployment.go:95] "Request Body" body=""
	I0317 12:09:16.112688    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:16.112688    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:16.112688    9924 round_trippers.go:470] GET https://172.25.16.124:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0317 12:09:16.112688    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:16.112688    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:16.112688    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:16.112688    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:16.112688    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:16.118082    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:09:16.118082    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:16.118082    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:16.118210    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:16.118210    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:16.118210    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:16 GMT
	I0317 12:09:16.118210    9924 round_trippers.go:587]     Audit-Id: 0d49e6fc-9186-423f-9c64-a1e976f736bb
	I0317 12:09:16.118210    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:16.118709    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:16.120272    9924 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 12:09:16.120272    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:16.120272    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:16.120272    9924 round_trippers.go:587]     Content-Length: 144
	I0317 12:09:16.120272    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:16 GMT
	I0317 12:09:16.120272    9924 round_trippers.go:587]     Audit-Id: 3438cd64-2264-411f-a0a9-01ed259ab256
	I0317 12:09:16.120272    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:16.120410    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:16.120410    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:16.120487    9924 deployment.go:95] "Response Body" body=<
		00000000  6b 38 73 00 0a 17 0a 0e  61 75 74 6f 73 63 61 6c  |k8s.....autoscal|
		00000010  69 6e 67 2f 76 31 12 05  53 63 61 6c 65 12 6d 0a  |ing/v1..Scale.m.|
		00000020  51 0a 07 63 6f 72 65 64  6e 73 12 00 1a 0b 6b 75  |Q..coredns....ku|
		00000030  62 65 2d 73 79 73 74 65  6d 22 00 2a 24 33 33 62  |be-system".*$33b|
		00000040  63 34 32 37 37 2d 63 39  65 34 2d 34 31 34 35 2d  |c4277-c9e4-4145-|
		00000050  62 61 61 61 2d 66 61 33  39 34 34 38 65 37 38 39  |baaa-fa39448e789|
		00000060  33 32 03 33 39 34 38 00  42 08 08 e5 a1 e0 be 06  |32.3948.B.......|
		00000070  10 00 12 02 08 01 1a 14  08 01 12 10 6b 38 73 2d  |............k8s-|
		00000080  61 70 70 3d 6b 75 62 65  2d 64 6e 73 1a 00 22 00  |app=kube-dns..".|
	 >
	I0317 12:09:16.120588    9924 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-781100" context rescaled to 1 replicas
	I0317 12:09:16.613096    9924 type.go:168] "Request Body" body=""
	I0317 12:09:16.613096    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:16.613096    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:16.613096    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:16.613096    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:16.618104    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:16.618182    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:16.618182    9924 round_trippers.go:587]     Audit-Id: 8cc41a76-af9a-40f6-ab8f-9385b9e9c38c
	I0317 12:09:16.618182    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:16.618182    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:16.618182    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:16.618182    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:16.618265    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:16 GMT
	I0317 12:09:16.619003    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:17.113197    9924 type.go:168] "Request Body" body=""
	I0317 12:09:17.113266    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:17.113266    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:17.113266    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:17.113266    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:17.117731    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:17.117731    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:17.117731    9924 round_trippers.go:587]     Audit-Id: 6a65b415-4ded-44bd-8a1a-60f522c268d3
	I0317 12:09:17.117833    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:17.117833    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:17.117833    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:17.117833    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:17.117833    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:17 GMT
	I0317 12:09:17.118220    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:17.251638    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:09:17.252598    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:09:17.252598    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:09:17.252646    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:09:17.253808    9924 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 12:09:17.253808    9924 kapi.go:59] client config for multinode-781100: &rest.Config{Host:"https://172.25.16.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-781100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-781100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2e292e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0317 12:09:17.254715    9924 addons.go:238] Setting addon default-storageclass=true in "multinode-781100"
	I0317 12:09:17.254715    9924 host.go:66] Checking if "multinode-781100" exists ...
	I0317 12:09:17.255649    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:09:17.256654    9924 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 12:09:17.259646    9924 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 12:09:17.259646    9924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 12:09:17.259646    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:09:17.612644    9924 type.go:168] "Request Body" body=""
	I0317 12:09:17.612644    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:17.612644    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:17.612644    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:17.612644    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:17.616663    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:17.616663    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:17.616663    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:17.616663    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:17.616663    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:17.616663    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:17.616663    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:17 GMT
	I0317 12:09:17.616663    9924 round_trippers.go:587]     Audit-Id: ff01468e-8a2b-49a0-a3dc-02c6b82418e7
	I0317 12:09:17.617653    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:17.617653    9924 node_ready.go:53] node "multinode-781100" has status "Ready":"False"
	I0317 12:09:18.113891    9924 type.go:168] "Request Body" body=""
	I0317 12:09:18.113891    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:18.113891    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:18.113891    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:18.113891    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:18.119627    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:09:18.119627    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:18.119752    9924 round_trippers.go:587]     Audit-Id: 0f28fd5f-fdd5-43fd-9a00-8d5f63296b3e
	I0317 12:09:18.119752    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:18.119752    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:18.119752    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:18.119752    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:18.119752    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:18 GMT
	I0317 12:09:18.120106    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:18.613308    9924 type.go:168] "Request Body" body=""
	I0317 12:09:18.613308    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:18.613308    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:18.613308    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:18.613308    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:18.618320    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:09:18.618320    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:18.618320    9924 round_trippers.go:587]     Audit-Id: 562e7229-a349-44c8-afb6-5226e55d9f59
	I0317 12:09:18.618320    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:18.618320    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:18.618320    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:18.618320    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:18.618320    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:18 GMT
	I0317 12:09:18.618320    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:19.112998    9924 type.go:168] "Request Body" body=""
	I0317 12:09:19.112998    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:19.112998    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:19.112998    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:19.112998    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:19.117513    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:19.117632    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:19.117632    9924 round_trippers.go:587]     Audit-Id: 19c37bb6-f03b-4455-b9fd-1d649c9806b9
	I0317 12:09:19.117632    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:19.117632    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:19.117632    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:19.117632    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:19.117632    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:19 GMT
	I0317 12:09:19.118070    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:19.612920    9924 type.go:168] "Request Body" body=""
	I0317 12:09:19.613279    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:19.613279    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:19.613380    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:19.613380    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:19.616775    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:19.616775    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:19.616879    9924 round_trippers.go:587]     Audit-Id: d2f72fc8-aa34-4f06-8b40-a7714865a047
	I0317 12:09:19.616879    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:19.616879    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:19.616879    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:19.616879    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:19.616879    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:19 GMT
	I0317 12:09:19.617187    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:19.617405    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:09:19.617792    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:09:19.617792    9924 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 12:09:19.617904    9924 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 12:09:19.617904    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:09:19.637440    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:09:19.637440    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:09:19.637440    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:09:20.113016    9924 type.go:168] "Request Body" body=""
	I0317 12:09:20.113016    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:20.113016    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:20.113016    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:20.113016    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:20.118480    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:09:20.118480    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:20.118480    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:20.118480    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:20.118480    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:20 GMT
	I0317 12:09:20.118480    9924 round_trippers.go:587]     Audit-Id: f4b8f840-d5db-489b-b5a5-671569ecea71
	I0317 12:09:20.118480    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:20.118480    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:20.119023    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:20.119164    9924 node_ready.go:53] node "multinode-781100" has status "Ready":"False"
	I0317 12:09:20.613021    9924 type.go:168] "Request Body" body=""
	I0317 12:09:20.613021    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:20.613021    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:20.613021    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:20.613021    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:20.618898    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:09:20.618898    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:20.618898    9924 round_trippers.go:587]     Audit-Id: d1c9d59f-0781-4049-9853-da81ef882623
	I0317 12:09:20.618898    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:20.618898    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:20.618898    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:20.618898    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:20.618898    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:20 GMT
	I0317 12:09:20.619174    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:21.113131    9924 type.go:168] "Request Body" body=""
	I0317 12:09:21.113131    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:21.113131    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:21.113131    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:21.113131    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:21.117020    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:21.117144    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:21.117168    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:21.117168    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:21.117168    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:21.117168    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:21.117168    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:21 GMT
	I0317 12:09:21.117168    9924 round_trippers.go:587]     Audit-Id: 23437549-3507-42ff-922e-4cf95c989103
	I0317 12:09:21.117893    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:21.612681    9924 type.go:168] "Request Body" body=""
	I0317 12:09:21.612681    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:21.612681    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:21.612681    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:21.612681    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:21.616782    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:21.616782    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:21.616782    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:21.616782    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:21.616886    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:21.616886    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:21 GMT
	I0317 12:09:21.616886    9924 round_trippers.go:587]     Audit-Id: 0ef2483a-667e-45c1-a89b-f7a29c2fff52
	I0317 12:09:21.616886    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:21.617539    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:21.968125    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:09:21.968290    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:09:21.968374    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:09:22.113904    9924 type.go:168] "Request Body" body=""
	I0317 12:09:22.113904    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:22.114128    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:22.114184    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:22.114184    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:22.118152    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:22.118152    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:22.118254    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:22.118254    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:22.118254    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:22.118254    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:22 GMT
	I0317 12:09:22.118254    9924 round_trippers.go:587]     Audit-Id: bf14fd0c-983e-4e45-a531-18a642ab66c5
	I0317 12:09:22.118254    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:22.118812    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:22.394300    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:09:22.394353    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:09:22.394353    9924 sshutil.go:53] new ssh client: &{IP:172.25.16.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\id_rsa Username:docker}
	I0317 12:09:22.536625    9924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 12:09:22.613495    9924 type.go:168] "Request Body" body=""
	I0317 12:09:22.613495    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:22.613495    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:22.613495    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:22.613495    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:22.617819    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:22.618013    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:22.618013    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:22.618013    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:22 GMT
	I0317 12:09:22.618013    9924 round_trippers.go:587]     Audit-Id: ee8b16f2-6639-41d6-9a3d-aa3f86c6574e
	I0317 12:09:22.618013    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:22.618087    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:22.618087    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:22.618400    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:22.618588    9924 node_ready.go:53] node "multinode-781100" has status "Ready":"False"
	I0317 12:09:23.113646    9924 type.go:168] "Request Body" body=""
	I0317 12:09:23.114136    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:23.114260    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:23.114321    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:23.114321    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:23.231409    9924 round_trippers.go:581] Response Status: 200 OK in 117 milliseconds
	I0317 12:09:23.231409    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:23.231409    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:23.231409    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:23 GMT
	I0317 12:09:23.231409    9924 round_trippers.go:587]     Audit-Id: 01342b56-daf1-4bb0-9e83-be5973936c70
	I0317 12:09:23.231409    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:23.231409    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:23.231571    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:23.231970    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:23.394835    9924 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0317 12:09:23.395597    9924 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0317 12:09:23.395666    9924 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0317 12:09:23.395666    9924 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0317 12:09:23.395666    9924 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0317 12:09:23.395666    9924 command_runner.go:130] > pod/storage-provisioner created
	I0317 12:09:23.612836    9924 type.go:168] "Request Body" body=""
	I0317 12:09:23.612836    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:23.612836    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:23.612836    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:23.612836    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:23.617675    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:23.617822    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:23.617822    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:23.617822    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:23.617822    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:23.617822    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:23.617822    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:23 GMT
	I0317 12:09:23.617822    9924 round_trippers.go:587]     Audit-Id: d7ec9562-6ad4-4c73-bc7e-8b121ca2f59c
	I0317 12:09:23.618279    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:24.113322    9924 type.go:168] "Request Body" body=""
	I0317 12:09:24.113322    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:24.113322    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:24.113322    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:24.113322    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:24.116750    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:24.116750    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:24.117693    9924 round_trippers.go:587]     Audit-Id: d5bfe2cf-3a82-4b81-9c6f-53d5b5f17f8d
	I0317 12:09:24.117693    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:24.117693    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:24.117693    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:24.117693    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:24.117693    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:24 GMT
	I0317 12:09:24.118161    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:24.593457    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:09:24.594067    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:09:24.594067    9924 sshutil.go:53] new ssh client: &{IP:172.25.16.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\id_rsa Username:docker}
	I0317 12:09:24.612666    9924 type.go:168] "Request Body" body=""
	I0317 12:09:24.612666    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:24.613198    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:24.613198    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:24.613198    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:24.619169    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:09:24.619169    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:24.619169    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:24.619169    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:24 GMT
	I0317 12:09:24.619169    9924 round_trippers.go:587]     Audit-Id: 17e703da-96f2-4273-8ed2-68b29052fbda
	I0317 12:09:24.619169    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:24.619169    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:24.619169    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:24.619169    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:24.619758    9924 node_ready.go:53] node "multinode-781100" has status "Ready":"False"
	I0317 12:09:24.743395    9924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 12:09:24.887707    9924 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0317 12:09:24.888072    9924 type.go:204] "Request Body" body=""
	I0317 12:09:24.888218    9924 round_trippers.go:470] GET https://172.25.16.124:8443/apis/storage.k8s.io/v1/storageclasses
	I0317 12:09:24.888218    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:24.888267    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:24.888267    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:24.892645    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:24.892749    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:24.892749    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:24.892749    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:24.892749    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:24.892749    9924 round_trippers.go:587]     Content-Length: 957
	I0317 12:09:24.892749    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:24 GMT
	I0317 12:09:24.892749    9924 round_trippers.go:587]     Audit-Id: 1374058e-ce85-4181-93c3-b9730f3d2117
	I0317 12:09:24.892749    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:24.892961    9924 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 25 0a 11  73 74 6f 72 61 67 65 2e  |k8s..%..storage.|
		00000010  6b 38 73 2e 69 6f 2f 76  31 12 10 53 74 6f 72 61  |k8s.io/v1..Stora|
		00000020  67 65 43 6c 61 73 73 4c  69 73 74 12 8b 07 0a 09  |geClassList.....|
		00000030  0a 00 12 03 34 32 30 1a  00 12 fd 06 0a cd 06 0a  |....420.........|
		00000040  08 73 74 61 6e 64 61 72  64 12 00 1a 00 22 00 2a  |.standard....".*|
		00000050  24 61 36 32 62 39 65 38  33 2d 30 39 33 63 2d 34  |$a62b9e83-093c-4|
		00000060  32 34 35 2d 38 31 33 65  2d 61 65 36 61 63 62 35  |245-813e-ae6acb5|
		00000070  32 62 38 32 30 32 03 34  32 30 38 00 42 08 08 f4  |2b8202.4208.B...|
		00000080  a1 e0 be 06 10 00 5a 2f  0a 1f 61 64 64 6f 6e 6d  |......Z/..addonm|
		00000090  61 6e 61 67 65 72 2e 6b  75 62 65 72 6e 65 74 65  |anager.kubernete|
		000000a0  73 2e 69 6f 2f 6d 6f 64  65 12 0c 45 6e 73 75 72  |s.io/mode..Ensur|
		000000b0  65 45 78 69 73 74 73 62  b7 02 0a 30 6b 75 62 65  |eExistsb...0kube|
		000000c0  63 74 6c 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |ctl.kubernetes. [truncated 3713 chars]
	 >
	I0317 12:09:24.893311    9924 type.go:267] "Request Body" body=<
		00000000  6b 38 73 00 0a 21 0a 11  73 74 6f 72 61 67 65 2e  |k8s..!..storage.|
		00000010  6b 38 73 2e 69 6f 2f 76  31 12 0c 53 74 6f 72 61  |k8s.io/v1..Stora|
		00000020  67 65 43 6c 61 73 73 12  fd 06 0a cd 06 0a 08 73  |geClass........s|
		00000030  74 61 6e 64 61 72 64 12  00 1a 00 22 00 2a 24 61  |tandard....".*$a|
		00000040  36 32 62 39 65 38 33 2d  30 39 33 63 2d 34 32 34  |62b9e83-093c-424|
		00000050  35 2d 38 31 33 65 2d 61  65 36 61 63 62 35 32 62  |5-813e-ae6acb52b|
		00000060  38 32 30 32 03 34 32 30  38 00 42 08 08 f4 a1 e0  |8202.4208.B.....|
		00000070  be 06 10 00 5a 2f 0a 1f  61 64 64 6f 6e 6d 61 6e  |....Z/..addonman|
		00000080  61 67 65 72 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |ager.kubernetes.|
		00000090  69 6f 2f 6d 6f 64 65 12  0c 45 6e 73 75 72 65 45  |io/mode..EnsureE|
		000000a0  78 69 73 74 73 62 b7 02  0a 30 6b 75 62 65 63 74  |xistsb...0kubect|
		000000b0  6c 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |l.kubernetes.io/|
		000000c0  6c 61 73 74 2d 61 70 70  6c 69 65 64 2d 63 6f 6e  |last-applied-co [truncated 3632 chars]
	 >
	I0317 12:09:24.893338    9924 round_trippers.go:470] PUT https://172.25.16.124:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0317 12:09:24.893338    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:24.893338    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:24.893338    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:24.893338    9924 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:24.905115    9924 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0317 12:09:24.905115    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:24.905115    9924 round_trippers.go:587]     Audit-Id: 45f43cbb-15d8-45fe-82eb-5199ec507bde
	I0317 12:09:24.905115    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:24.905115    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:24.905115    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:24.905115    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:24.905115    9924 round_trippers.go:587]     Content-Length: 939
	I0317 12:09:24.905115    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:24 GMT
	I0317 12:09:24.905115    9924 type.go:267] "Response Body" body=<
		00000000  6b 38 73 00 0a 21 0a 11  73 74 6f 72 61 67 65 2e  |k8s..!..storage.|
		00000010  6b 38 73 2e 69 6f 2f 76  31 12 0c 53 74 6f 72 61  |k8s.io/v1..Stora|
		00000020  67 65 43 6c 61 73 73 12  fd 06 0a cd 06 0a 08 73  |geClass........s|
		00000030  74 61 6e 64 61 72 64 12  00 1a 00 22 00 2a 24 61  |tandard....".*$a|
		00000040  36 32 62 39 65 38 33 2d  30 39 33 63 2d 34 32 34  |62b9e83-093c-424|
		00000050  35 2d 38 31 33 65 2d 61  65 36 61 63 62 35 32 62  |5-813e-ae6acb52b|
		00000060  38 32 30 32 03 34 32 30  38 00 42 08 08 f4 a1 e0  |8202.4208.B.....|
		00000070  be 06 10 00 5a 2f 0a 1f  61 64 64 6f 6e 6d 61 6e  |....Z/..addonman|
		00000080  61 67 65 72 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |ager.kubernetes.|
		00000090  69 6f 2f 6d 6f 64 65 12  0c 45 6e 73 75 72 65 45  |io/mode..EnsureE|
		000000a0  78 69 73 74 73 62 b7 02  0a 30 6b 75 62 65 63 74  |xistsb...0kubect|
		000000b0  6c 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |l.kubernetes.io/|
		000000c0  6c 61 73 74 2d 61 70 70  6c 69 65 64 2d 63 6f 6e  |last-applied-co [truncated 3632 chars]
	 >
	I0317 12:09:24.911100    9924 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 12:09:24.917095    9924 addons.go:514] duration metric: took 10.0842718s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0317 12:09:25.112708    9924 type.go:168] "Request Body" body=""
	I0317 12:09:25.112708    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:25.112708    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:25.112708    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:25.112708    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:25.117358    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:25.117469    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:25.117469    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:25.117469    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:25.117469    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:25.117469    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:25.117469    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:25 GMT
	I0317 12:09:25.117469    9924 round_trippers.go:587]     Audit-Id: f4be23ba-bed7-4ad1-a470-387265288a60
	I0317 12:09:25.117610    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:25.613494    9924 type.go:168] "Request Body" body=""
	I0317 12:09:25.613914    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:25.613914    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:25.613914    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:25.614019    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:25.617974    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:25.618099    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:25.618099    9924 round_trippers.go:587]     Audit-Id: 6f2be52e-8043-4419-af96-30045279026d
	I0317 12:09:25.618099    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:25.618099    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:25.618099    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:25.618099    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:25.618099    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:25 GMT
	I0317 12:09:25.618322    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:26.112696    9924 type.go:168] "Request Body" body=""
	I0317 12:09:26.112696    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:26.112696    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:26.112696    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:26.112696    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:26.117099    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:26.117099    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:26.117226    9924 round_trippers.go:587]     Audit-Id: f13fb88d-e0c4-4c90-9db8-aafd5f37dbb4
	I0317 12:09:26.117226    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:26.117226    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:26.117226    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:26.117226    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:26.117226    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:26 GMT
	I0317 12:09:26.117801    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:26.614443    9924 type.go:168] "Request Body" body=""
	I0317 12:09:26.614443    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:26.614443    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:26.614443    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:26.614443    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:26.619105    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:26.619201    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:26.619201    9924 round_trippers.go:587]     Audit-Id: 65c636b2-6178-4a2d-b49b-3443ec347255
	I0317 12:09:26.619201    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:26.619201    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:26.619201    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:26.619201    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:26.619201    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:26 GMT
	I0317 12:09:26.619551    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:26.619925    9924 node_ready.go:53] node "multinode-781100" has status "Ready":"False"
	I0317 12:09:27.112757    9924 type.go:168] "Request Body" body=""
	I0317 12:09:27.112757    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:27.112757    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:27.112757    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:27.112757    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:27.117144    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:27.117629    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:27.117629    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:27.117629    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:27.117629    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:27 GMT
	I0317 12:09:27.117629    9924 round_trippers.go:587]     Audit-Id: e71f7ad1-42f4-4cd0-8a7d-48d2095ed514
	I0317 12:09:27.117629    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:27.117629    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:27.118144    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:27.613347    9924 type.go:168] "Request Body" body=""
	I0317 12:09:27.613507    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:27.613507    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:27.613507    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:27.613507    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:27.617356    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:27.618288    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:27.618288    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:27.618288    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:27.618288    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:27.618357    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:27.618357    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:27 GMT
	I0317 12:09:27.618357    9924 round_trippers.go:587]     Audit-Id: 63c7d592-a7f2-49d9-9211-34542da8750c
	I0317 12:09:27.619002    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:28.112785    9924 type.go:168] "Request Body" body=""
	I0317 12:09:28.112785    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:28.112785    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:28.112785    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:28.112785    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:28.117862    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:28.117862    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:28.117862    9924 round_trippers.go:587]     Audit-Id: 145c9ac5-61f2-4621-9c2d-76e70f6fd67f
	I0317 12:09:28.117862    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:28.117958    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:28.117958    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:28.117958    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:28.117958    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:28 GMT
	I0317 12:09:28.118945    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:28.613409    9924 type.go:168] "Request Body" body=""
	I0317 12:09:28.613941    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:28.614008    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:28.614008    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:28.614041    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:28.619855    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:09:28.619855    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:28.619855    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:28.619855    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:28.619951    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:28 GMT
	I0317 12:09:28.619951    9924 round_trippers.go:587]     Audit-Id: fa91995f-0657-405a-a41a-a4f61fba1103
	I0317 12:09:28.619951    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:28.619951    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:28.620418    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:28.620520    9924 node_ready.go:53] node "multinode-781100" has status "Ready":"False"
	I0317 12:09:29.114089    9924 type.go:168] "Request Body" body=""
	I0317 12:09:29.114158    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:29.114158    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:29.114248    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:29.114248    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:29.117915    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:29.117915    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:29.117989    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:29.117989    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:29 GMT
	I0317 12:09:29.118006    9924 round_trippers.go:587]     Audit-Id: 05b0bf71-ca31-4c2f-803e-c2daf15444b9
	I0317 12:09:29.118006    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:29.118006    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:29.118006    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:29.118821    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:29.613344    9924 type.go:168] "Request Body" body=""
	I0317 12:09:29.613717    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:29.613717    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:29.613717    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:29.613717    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:29.617607    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:29.618322    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:29.618386    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:29.618386    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:29.618386    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:29 GMT
	I0317 12:09:29.618419    9924 round_trippers.go:587]     Audit-Id: b82c0954-9094-43e9-8f9a-3244bf4e4ff2
	I0317 12:09:29.618419    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:29.618419    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:29.618419    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:30.113249    9924 type.go:168] "Request Body" body=""
	I0317 12:09:30.113249    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:30.113249    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:30.113249    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:30.113249    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:30.118036    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:30.118106    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:30.118106    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:30.118106    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:30.118106    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:30.118106    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:30 GMT
	I0317 12:09:30.118106    9924 round_trippers.go:587]     Audit-Id: 7934534c-4fd2-48d8-95a6-79f1685e0fca
	I0317 12:09:30.118106    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:30.118106    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:30.612809    9924 type.go:168] "Request Body" body=""
	I0317 12:09:30.612809    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:30.612809    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:30.612809    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:30.612809    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:30.617459    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:30.617521    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:30.617521    9924 round_trippers.go:587]     Audit-Id: 29c09356-1a0d-4782-954d-04bd18e97f46
	I0317 12:09:30.617521    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:30.617521    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:30.617521    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:30.617521    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:30.617521    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:30 GMT
	I0317 12:09:30.618233    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:31.113226    9924 type.go:168] "Request Body" body=""
	I0317 12:09:31.113373    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:31.113373    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:31.113373    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:31.113373    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:31.117979    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:31.118043    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:31.118043    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:31.118043    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:31.118043    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:31 GMT
	I0317 12:09:31.118113    9924 round_trippers.go:587]     Audit-Id: 469a14ed-4473-4918-a986-d6d74327b4db
	I0317 12:09:31.118113    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:31.118113    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:31.118697    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:31.118966    9924 node_ready.go:53] node "multinode-781100" has status "Ready":"False"
	I0317 12:09:31.613373    9924 type.go:168] "Request Body" body=""
	I0317 12:09:31.613373    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:31.613373    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:31.613373    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:31.613373    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:31.618299    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:31.618387    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:31.618387    9924 round_trippers.go:587]     Audit-Id: c11fab60-12b3-4c88-b463-38c90b16374e
	I0317 12:09:31.618387    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:31.618387    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:31.618387    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:31.618387    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:31.618501    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:31 GMT
	I0317 12:09:31.618920    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:32.113405    9924 type.go:168] "Request Body" body=""
	I0317 12:09:32.113859    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:32.113859    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:32.113859    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:32.113859    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:32.118116    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:32.118116    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:32.118116    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:32 GMT
	I0317 12:09:32.118116    9924 round_trippers.go:587]     Audit-Id: bd5221c7-6d66-483b-9004-607381b13696
	I0317 12:09:32.118116    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:32.118116    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:32.118116    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:32.118116    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:32.118729    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:32.614271    9924 type.go:168] "Request Body" body=""
	I0317 12:09:32.614271    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:32.614271    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:32.614271    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:32.614271    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:32.619637    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:09:32.619637    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:32.619693    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:32.619693    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:32.619693    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:32.619693    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:32.619693    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:32 GMT
	I0317 12:09:32.619693    9924 round_trippers.go:587]     Audit-Id: ec78434e-e25a-4967-a1a7-a9da7b00ebcf
	I0317 12:09:32.619693    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:33.112907    9924 type.go:168] "Request Body" body=""
	I0317 12:09:33.112907    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:33.112907    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:33.112907    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:33.112907    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:33.118526    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:09:33.118616    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:33.118616    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:33.118616    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:33.118616    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:33 GMT
	I0317 12:09:33.118616    9924 round_trippers.go:587]     Audit-Id: ae2d8e69-bf2f-43f7-9836-d9bfc708a6af
	I0317 12:09:33.118616    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:33.118616    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:33.119006    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:33.119289    9924 node_ready.go:53] node "multinode-781100" has status "Ready":"False"
	I0317 12:09:33.613620    9924 type.go:168] "Request Body" body=""
	I0317 12:09:33.613620    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:33.613620    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:33.613620    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:33.613620    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:33.621823    9924 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 12:09:33.621854    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:33.621854    9924 round_trippers.go:587]     Audit-Id: 5f72c442-859b-4f0e-b090-02e2d80d3e65
	I0317 12:09:33.621854    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:33.621854    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:33.621854    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:33.621854    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:33.621854    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:33 GMT
	I0317 12:09:33.621854    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:34.112993    9924 type.go:168] "Request Body" body=""
	I0317 12:09:34.112993    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:34.112993    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:34.112993    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:34.112993    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:34.117595    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:34.117595    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:34.117595    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:34.117595    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:34.117595    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:34 GMT
	I0317 12:09:34.117595    9924 round_trippers.go:587]     Audit-Id: f6d1e999-02c2-4d03-82c3-a84965359d88
	I0317 12:09:34.117595    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:34.117595    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:34.118216    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:34.612784    9924 type.go:168] "Request Body" body=""
	I0317 12:09:34.613415    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:34.613415    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:34.613415    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:34.613415    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:34.617668    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:34.617668    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:34.617668    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:34.617668    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:34.617791    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:34 GMT
	I0317 12:09:34.617791    9924 round_trippers.go:587]     Audit-Id: 43f1acf4-3320-439f-ad96-f7fbca37c276
	I0317 12:09:34.617791    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:34.617791    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:34.618192    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:35.113349    9924 type.go:168] "Request Body" body=""
	I0317 12:09:35.113349    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:35.113349    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:35.113349    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:35.113349    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:35.116924    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:35.117731    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:35.117731    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:35 GMT
	I0317 12:09:35.117731    9924 round_trippers.go:587]     Audit-Id: 8039aef8-151d-4ae5-9df8-e34807ba1eac
	I0317 12:09:35.117731    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:35.117731    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:35.117731    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:35.117731    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:35.118185    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:35.613463    9924 type.go:168] "Request Body" body=""
	I0317 12:09:35.613902    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:35.613902    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:35.613902    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:35.614021    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:35.620478    9924 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 12:09:35.620478    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:35.620478    9924 round_trippers.go:587]     Audit-Id: 6c7d8232-c069-4cca-a665-748714818e12
	I0317 12:09:35.620478    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:35.620478    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:35.620478    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:35.620478    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:35.620478    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:35 GMT
	I0317 12:09:35.621062    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:35.621345    9924 node_ready.go:53] node "multinode-781100" has status "Ready":"False"
	I0317 12:09:36.113739    9924 type.go:168] "Request Body" body=""
	I0317 12:09:36.114397    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:36.114397    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:36.114397    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:36.114397    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:36.118714    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:36.118714    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:36.118714    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:36.118714    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:36.118714    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:36 GMT
	I0317 12:09:36.118714    9924 round_trippers.go:587]     Audit-Id: 150813fd-cf35-4c20-8365-5624d3da46b3
	I0317 12:09:36.118714    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:36.118714    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:36.119435    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bb 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 33 32  36 38 00 42 08 08 e2 a1  |e7392.3268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20924 chars]
	 >
	I0317 12:09:36.613910    9924 type.go:168] "Request Body" body=""
	I0317 12:09:36.614105    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:36.614105    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:36.614105    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:36.614210    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:36.618693    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:36.619123    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:36.619123    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:36 GMT
	I0317 12:09:36.619123    9924 round_trippers.go:587]     Audit-Id: 41f74079-f392-4221-9b53-988cee168721
	I0317 12:09:36.619123    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:36.619123    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:36.619123    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:36.619123    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:36.619637    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c2 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 32  36 38 00 42 08 08 e2 a1  |e7392.4268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20299 chars]
	 >
	I0317 12:09:36.619866    9924 node_ready.go:49] node "multinode-781100" has status "Ready":"True"
	I0317 12:09:36.619866    9924 node_ready.go:38] duration metric: took 21.007124s for node "multinode-781100" to be "Ready" ...
	I0317 12:09:36.619866    9924 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 12:09:36.620023    9924 type.go:204] "Request Body" body=""
	I0317 12:09:36.620225    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods
	I0317 12:09:36.620225    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:36.620285    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:36.620285    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:36.636712    9924 round_trippers.go:581] Response Status: 200 OK in 15 milliseconds
	I0317 12:09:36.636712    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:36.636712    9924 round_trippers.go:587]     Audit-Id: 59c501ee-a874-46ce-8186-26ce83e97b4b
	I0317 12:09:36.636712    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:36.636712    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:36.636712    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:36.636712    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:36.636712    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:36 GMT
	I0317 12:09:36.639430    9924 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 ff c5 02 0a  09 0a 00 12 03 34 33 32  |ist..........432|
		00000020  1a 00 12 d7 26 0a 8b 19  0a 18 63 6f 72 65 64 6e  |....&.....coredn|
		00000030  73 2d 36 36 38 64 36 62  66 39 62 63 2d 62 38 34  |s-668d6bf9bc-b84|
		00000040  34 35 12 13 63 6f 72 65  64 6e 73 2d 36 36 38 64  |45..coredns-668d|
		00000050  36 62 66 39 62 63 2d 1a  0b 6b 75 62 65 2d 73 79  |6bf9bc-..kube-sy|
		00000060  73 74 65 6d 22 00 2a 24  31 65 66 61 30 64 62 30  |stem".*$1efa0db0|
		00000070  2d 31 33 36 61 2d 34 34  30 35 2d 38 35 65 31 2d  |-136a-4405-85e1-|
		00000080  34 64 32 61 62 63 38 39  62 36 61 31 32 03 34 33  |4d2abc89b6a12.43|
		00000090  32 38 00 42 08 08 ea a1  e0 be 06 10 00 5a 13 0a  |28.B.........Z..|
		000000a0  07 6b 38 73 2d 61 70 70  12 08 6b 75 62 65 2d 64  |.k8s-app..kube-d|
		000000b0  6e 73 5a 1f 0a 11 70 6f  64 2d 74 65 6d 70 6c 61  |nsZ...pod-templa|
		000000c0  74 65 2d 68 61 73 68 12  0a 36 36 38 64 36 62 66  |te-hash..668d6b [truncated 205160 chars]
	 >
	I0317 12:09:36.639685    9924 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-b8445" in "kube-system" namespace to be "Ready" ...
	I0317 12:09:36.639685    9924 type.go:168] "Request Body" body=""
	I0317 12:09:36.640324    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-b8445
	I0317 12:09:36.640324    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:36.640324    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:36.640324    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:36.644040    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:36.644040    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:36.644152    9924 round_trippers.go:587]     Audit-Id: 9030256f-f8fc-4f58-ba20-c612a6b9fa78
	I0317 12:09:36.644152    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:36.644152    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:36.644152    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:36.644152    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:36.644152    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:36 GMT
	I0317 12:09:36.644440    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d7 26 0a 8b 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.&.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 62 38 34 34 35 12  |68d6bf9bc-b8445.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 65 66  61 30 64 62 30 2d 31 33  |m".*$1efa0db0-13|
		00000060  36 61 2d 34 34 30 35 2d  38 35 65 31 2d 34 64 32  |6a-4405-85e1-4d2|
		00000070  61 62 63 38 39 62 36 61  31 32 03 34 33 32 38 00  |abc89b6a12.4328.|
		00000080  42 08 08 ea a1 e0 be 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 23542 chars]
	 >
	I0317 12:09:36.644680    9924 type.go:168] "Request Body" body=""
	I0317 12:09:36.644746    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:36.644827    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:36.644845    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:36.644845    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:36.647654    9924 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 12:09:36.647981    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:36.647981    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:36.647981    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:36.647981    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:36 GMT
	I0317 12:09:36.647981    9924 round_trippers.go:587]     Audit-Id: d68f6f01-ebd8-4fc6-aa1c-e1041f85b53f
	I0317 12:09:36.647981    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:36.647981    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:36.649176    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c2 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 32  36 38 00 42 08 08 e2 a1  |e7392.4268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20299 chars]
	 >
	I0317 12:09:37.140556    9924 type.go:168] "Request Body" body=""
	I0317 12:09:37.140783    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-b8445
	I0317 12:09:37.140812    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:37.140812    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:37.140838    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:37.145016    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:37.145064    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:37.145064    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:37.145064    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:37.145064    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:37.145064    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:37.145064    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:37 GMT
	I0317 12:09:37.145064    9924 round_trippers.go:587]     Audit-Id: 7658fada-5ee2-4f1c-9457-75805da57053
	I0317 12:09:37.145217    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d7 26 0a 8b 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.&.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 62 38 34 34 35 12  |68d6bf9bc-b8445.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 65 66  61 30 64 62 30 2d 31 33  |m".*$1efa0db0-13|
		00000060  36 61 2d 34 34 30 35 2d  38 35 65 31 2d 34 64 32  |6a-4405-85e1-4d2|
		00000070  61 62 63 38 39 62 36 61  31 32 03 34 33 32 38 00  |abc89b6a12.4328.|
		00000080  42 08 08 ea a1 e0 be 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 23542 chars]
	 >
	I0317 12:09:37.145217    9924 type.go:168] "Request Body" body=""
	I0317 12:09:37.145217    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:37.145799    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:37.145799    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:37.145799    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:37.148131    9924 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 12:09:37.148131    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:37.148131    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:37.148131    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:37.148131    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:37.148131    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:37 GMT
	I0317 12:09:37.148131    9924 round_trippers.go:587]     Audit-Id: 93ad591a-7b00-40db-9fbd-8f61d7064613
	I0317 12:09:37.148131    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:37.148675    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c2 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 32  36 38 00 42 08 08 e2 a1  |e7392.4268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20299 chars]
	 >
	I0317 12:09:37.640492    9924 type.go:168] "Request Body" body=""
	I0317 12:09:37.640926    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-b8445
	I0317 12:09:37.640926    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:37.640972    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:37.640972    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:37.651855    9924 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0317 12:09:37.651855    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:37.651855    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:37.651855    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:37.651855    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:37.651855    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:37 GMT
	I0317 12:09:37.651855    9924 round_trippers.go:587]     Audit-Id: d3645658-aecd-4c91-a5c7-d2e0a29a49e9
	I0317 12:09:37.651855    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:37.652901    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d7 26 0a 8b 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.&.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 62 38 34 34 35 12  |68d6bf9bc-b8445.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 65 66  61 30 64 62 30 2d 31 33  |m".*$1efa0db0-13|
		00000060  36 61 2d 34 34 30 35 2d  38 35 65 31 2d 34 64 32  |6a-4405-85e1-4d2|
		00000070  61 62 63 38 39 62 36 61  31 32 03 34 33 32 38 00  |abc89b6a12.4328.|
		00000080  42 08 08 ea a1 e0 be 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 23542 chars]
	 >
	I0317 12:09:37.652901    9924 type.go:168] "Request Body" body=""
	I0317 12:09:37.652901    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:37.652901    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:37.652901    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:37.652901    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:37.660303    9924 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 12:09:37.660303    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:37.660303    9924 round_trippers.go:587]     Audit-Id: de7596db-a72a-4c3e-b009-7f05d16fa7d5
	I0317 12:09:37.660303    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:37.660303    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:37.660303    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:37.660303    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:37.660303    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:37 GMT
	I0317 12:09:37.660303    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c2 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 32  36 38 00 42 08 08 e2 a1  |e7392.4268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20299 chars]
	 >
	I0317 12:09:38.139972    9924 type.go:168] "Request Body" body=""
	I0317 12:09:38.139972    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-b8445
	I0317 12:09:38.139972    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:38.139972    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:38.139972    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:38.150172    9924 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0317 12:09:38.150274    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:38.150274    9924 round_trippers.go:587]     Audit-Id: aac96614-1ad6-4d85-833a-f6d22fbceb72
	I0317 12:09:38.150334    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:38.150334    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:38.150334    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:38.150334    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:38.150334    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:38 GMT
	I0317 12:09:38.150563    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  82 29 0a e8 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 62 38 34 34 35 12  |68d6bf9bc-b8445.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 65 66  61 30 64 62 30 2d 31 33  |m".*$1efa0db0-13|
		00000060  36 61 2d 34 34 30 35 2d  38 35 65 31 2d 34 64 32  |6a-4405-85e1-4d2|
		00000070  61 62 63 38 39 62 36 61  31 32 03 34 34 33 38 00  |abc89b6a12.4438.|
		00000080  42 08 08 ea a1 e0 be 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 25038 chars]
	 >
	I0317 12:09:38.150563    9924 type.go:168] "Request Body" body=""
	I0317 12:09:38.150563    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:38.150563    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:38.150563    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:38.150563    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:38.154432    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:38.154432    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:38.154432    9924 round_trippers.go:587]     Audit-Id: be389614-4688-4674-9125-eb4a4f25c162
	I0317 12:09:38.154432    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:38.154432    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:38.154432    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:38.154432    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:38.154432    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:38 GMT
	I0317 12:09:38.155626    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c2 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 32  36 38 00 42 08 08 e2 a1  |e7392.4268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20299 chars]
	 >
	I0317 12:09:38.639876    9924 type.go:168] "Request Body" body=""
	I0317 12:09:38.640466    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-b8445
	I0317 12:09:38.640466    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:38.640466    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:38.640466    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:38.648619    9924 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 12:09:38.648702    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:38.648742    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:38.648742    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:38 GMT
	I0317 12:09:38.648742    9924 round_trippers.go:587]     Audit-Id: f2717add-0a9f-44a8-aab4-1802c77af7ba
	I0317 12:09:38.648742    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:38.648742    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:38.648775    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:38.649490    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  82 29 0a e8 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 62 38 34 34 35 12  |68d6bf9bc-b8445.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 65 66  61 30 64 62 30 2d 31 33  |m".*$1efa0db0-13|
		00000060  36 61 2d 34 34 30 35 2d  38 35 65 31 2d 34 64 32  |6a-4405-85e1-4d2|
		00000070  61 62 63 38 39 62 36 61  31 32 03 34 34 33 38 00  |abc89b6a12.4438.|
		00000080  42 08 08 ea a1 e0 be 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 25038 chars]
	 >
	I0317 12:09:38.649490    9924 type.go:168] "Request Body" body=""
	I0317 12:09:38.649490    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:38.649490    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:38.649490    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:38.649490    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:38.653935    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:38.653963    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:38.653963    9924 round_trippers.go:587]     Audit-Id: 89152412-2baf-4918-90d8-5e3e349da180
	I0317 12:09:38.653963    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:38.653963    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:38.653963    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:38.653963    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:38.653963    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:38 GMT
	I0317 12:09:38.654367    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c2 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 32  36 38 00 42 08 08 e2 a1  |e7392.4268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20299 chars]
	 >
	I0317 12:09:38.654367    9924 pod_ready.go:103] pod "coredns-668d6bf9bc-b8445" in "kube-system" namespace has status "Ready":"False"
	I0317 12:09:39.140337    9924 type.go:168] "Request Body" body=""
	I0317 12:09:39.140740    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-b8445
	I0317 12:09:39.140740    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:39.140740    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:39.140740    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:39.144662    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:39.144662    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:39.144851    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:39.144851    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:39 GMT
	I0317 12:09:39.144851    9924 round_trippers.go:587]     Audit-Id: 11f7d4d9-85b6-4615-bc3f-6f1c47382801
	I0317 12:09:39.144851    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:39.144851    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:39.144851    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:39.145276    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d0 27 0a ae 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.'.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 62 38 34 34 35 12  |68d6bf9bc-b8445.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 65 66  61 30 64 62 30 2d 31 33  |m".*$1efa0db0-13|
		00000060  36 61 2d 34 34 30 35 2d  38 35 65 31 2d 34 64 32  |6a-4405-85e1-4d2|
		00000070  61 62 63 38 39 62 36 61  31 32 03 34 34 38 38 00  |abc89b6a12.4488.|
		00000080  42 08 08 ea a1 e0 be 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 24167 chars]
	 >
	I0317 12:09:39.145328    9924 type.go:168] "Request Body" body=""
	I0317 12:09:39.145328    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:39.145328    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:39.145328    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:39.145328    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:39.147936    9924 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 12:09:39.148801    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:39.148801    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:39 GMT
	I0317 12:09:39.148801    9924 round_trippers.go:587]     Audit-Id: d9440655-df89-47e1-8050-fad2d911b057
	I0317 12:09:39.148801    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:39.148801    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:39.148801    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:39.148801    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:39.149125    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c2 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 32  36 38 00 42 08 08 e2 a1  |e7392.4268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20299 chars]
	 >
	I0317 12:09:39.149284    9924 pod_ready.go:93] pod "coredns-668d6bf9bc-b8445" in "kube-system" namespace has status "Ready":"True"
	I0317 12:09:39.149389    9924 pod_ready.go:82] duration metric: took 2.5096783s for pod "coredns-668d6bf9bc-b8445" in "kube-system" namespace to be "Ready" ...
	I0317 12:09:39.149389    9924 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-781100" in "kube-system" namespace to be "Ready" ...
	I0317 12:09:39.149577    9924 type.go:168] "Request Body" body=""
	I0317 12:09:39.149601    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-781100
	I0317 12:09:39.149601    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:39.149601    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:39.149601    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:39.152354    9924 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 12:09:39.152354    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:39.152354    9924 round_trippers.go:587]     Audit-Id: 571e4b2b-4c1b-4561-8701-2f32fb681daa
	I0317 12:09:39.152354    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:39.152354    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:39.152354    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:39.152354    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:39.152354    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:39 GMT
	I0317 12:09:39.153021    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a0 2b 0a 9c 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.+.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 37 38  31 31 30 30 12 00 1a 0b  |inode-781100....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 63  |kube-system".*$c|
		00000040  32 30 63 39 31 61 33 2d  65 62 34 66 2d 34 36 62  |20c91a3-eb4f-46b|
		00000050  66 2d 38 38 61 38 2d 32  65 62 62 66 62 38 61 64  |f-88a8-2ebbfb8ad|
		00000060  35 33 64 32 03 33 39 31  38 00 42 08 08 e5 a1 e0  |53d2.3918.B.....|
		00000070  be 06 10 00 5a 11 0a 09  63 6f 6d 70 6f 6e 65 6e  |....Z...componen|
		00000080  74 12 04 65 74 63 64 5a  15 0a 04 74 69 65 72 12  |t..etcdZ...tier.|
		00000090  0d 63 6f 6e 74 72 6f 6c  2d 70 6c 61 6e 65 62 4e  |.control-planebN|
		000000a0  0a 30 6b 75 62 65 61 64  6d 2e 6b 75 62 65 72 6e  |.0kubeadm.kubern|
		000000b0  65 74 65 73 2e 69 6f 2f  65 74 63 64 2e 61 64 76  |etes.io/etcd.adv|
		000000c0  65 72 74 69 73 65 2d 63  6c 69 65 6e 74 2d 75 72  |ertise-client-u [truncated 26458 chars]
	 >
	I0317 12:09:39.153179    9924 type.go:168] "Request Body" body=""
	I0317 12:09:39.153179    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:39.153179    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:39.153179    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:39.153179    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:39.156683    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:39.156683    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:39.156683    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:39 GMT
	I0317 12:09:39.156683    9924 round_trippers.go:587]     Audit-Id: 64a796ae-663c-4a43-8de8-a230d8f2b6e9
	I0317 12:09:39.156683    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:39.156683    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:39.156683    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:39.156683    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:39.157670    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c2 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 32  36 38 00 42 08 08 e2 a1  |e7392.4268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20299 chars]
	 >
	I0317 12:09:39.157951    9924 pod_ready.go:93] pod "etcd-multinode-781100" in "kube-system" namespace has status "Ready":"True"
	I0317 12:09:39.157994    9924 pod_ready.go:82] duration metric: took 8.5617ms for pod "etcd-multinode-781100" in "kube-system" namespace to be "Ready" ...
	I0317 12:09:39.158061    9924 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-781100" in "kube-system" namespace to be "Ready" ...
	I0317 12:09:39.158104    9924 type.go:168] "Request Body" body=""
	I0317 12:09:39.158104    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-781100
	I0317 12:09:39.158104    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:39.158104    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:39.158104    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:39.160974    9924 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 12:09:39.160974    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:39.160974    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:39.160974    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:39 GMT
	I0317 12:09:39.160974    9924 round_trippers.go:587]     Audit-Id: 397de26b-5e9d-459a-8c47-e690e2652e4c
	I0317 12:09:39.160974    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:39.160974    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:39.160974    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:39.160974    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  85 34 0a ac 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.4.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  37 38 31 31 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |781100....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 33 39 33 35 64 35 64  |ystem".*$3935d5d|
		00000050  31 2d 62 36 38 31 2d 34  39 65 63 2d 39 38 30 31  |1-b681-49ec-9801|
		00000060  2d 66 39 34 30 66 33 34  38 32 30 65 31 32 03 33  |-f940f34820e12.3|
		00000070  38 36 38 00 42 08 08 e5  a1 e0 be 06 10 00 5a 1b  |868.B.........Z.|
		00000080  0a 09 63 6f 6d 70 6f 6e  65 6e 74 12 0e 6b 75 62  |..component..kub|
		00000090  65 2d 61 70 69 73 65 72  76 65 72 5a 15 0a 04 74  |e-apiserverZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 55 0a 3f 6b 75  62 65 61 64 6d 2e 6b 75  |nebU.?kubeadm.ku|
		000000c0  62 65 72 6e 65 74 65 73  2e 69 6f 2f 6b 75 62 65  |bernetes.io/kub [truncated 31993 chars]
	 >
	I0317 12:09:39.161751    9924 type.go:168] "Request Body" body=""
	I0317 12:09:39.161751    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:39.161751    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:39.161751    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:39.161751    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:39.163931    9924 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 12:09:39.163931    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:39.163931    9924 round_trippers.go:587]     Audit-Id: ebad3841-27a8-4bc3-940d-7d424b11bfa5
	I0317 12:09:39.163931    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:39.163931    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:39.163931    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:39.163931    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:39.163931    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:39 GMT
	I0317 12:09:39.164953    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c2 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 32  36 38 00 42 08 08 e2 a1  |e7392.4268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20299 chars]
	 >
	I0317 12:09:39.164953    9924 pod_ready.go:93] pod "kube-apiserver-multinode-781100" in "kube-system" namespace has status "Ready":"True"
	I0317 12:09:39.164953    9924 pod_ready.go:82] duration metric: took 6.8738ms for pod "kube-apiserver-multinode-781100" in "kube-system" namespace to be "Ready" ...
	I0317 12:09:39.164953    9924 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-781100" in "kube-system" namespace to be "Ready" ...
	I0317 12:09:39.164953    9924 type.go:168] "Request Body" body=""
	I0317 12:09:39.164953    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-781100
	I0317 12:09:39.164953    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:39.164953    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:39.164953    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:39.167709    9924 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 12:09:39.167970    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:39.167970    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:39.167970    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:39.167970    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:39.167970    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:39 GMT
	I0317 12:09:39.168086    9924 round_trippers.go:587]     Audit-Id: ec6b12eb-5d0b-4420-b237-ebf2474514a2
	I0317 12:09:39.168086    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:39.168498    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  eb 30 0a 99 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.0....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 37 38 31 31 30 30 12  |ultinode-781100.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 36 31 38 38 65 64  30 66 2d 61 32 35 32 2d  |*$6188ed0f-a252-|
		00000060  34 61 35 39 2d 39 62 61  34 2d 32 37 62 30 32 33  |4a59-9ba4-27b023|
		00000070  37 34 63 34 63 31 32 03  34 30 36 38 00 42 08 08  |74c4c12.4068.B..|
		00000080  e5 a1 e0 be 06 10 00 5a  24 0a 09 63 6f 6d 70 6f  |.......Z$..compo|
		00000090  6e 65 6e 74 12 17 6b 75  62 65 2d 63 6f 6e 74 72  |nent..kube-contr|
		000000a0  6f 6c 6c 65 72 2d 6d 61  6e 61 67 65 72 5a 15 0a  |oller-managerZ..|
		000000b0  04 74 69 65 72 12 0d 63  6f 6e 74 72 6f 6c 2d 70  |.tier..control-p|
		000000c0  6c 61 6e 65 62 3d 0a 19  6b 75 62 65 72 6e 65 74  |laneb=..kuberne [truncated 30008 chars]
	 >
	I0317 12:09:39.168566    9924 type.go:168] "Request Body" body=""
	I0317 12:09:39.168566    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:39.168566    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:39.168566    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:39.168566    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:39.171145    9924 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 12:09:39.171145    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:39.171145    9924 round_trippers.go:587]     Audit-Id: 001dec9d-d4e3-4e74-b5b2-13a2b05c860c
	I0317 12:09:39.171145    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:39.171145    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:39.171145    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:39.171145    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:39.171145    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:39 GMT
	I0317 12:09:39.171145    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c2 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 32  36 38 00 42 08 08 e2 a1  |e7392.4268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20299 chars]
	 >
	I0317 12:09:39.171145    9924 pod_ready.go:93] pod "kube-controller-manager-multinode-781100" in "kube-system" namespace has status "Ready":"True"
	I0317 12:09:39.171145    9924 pod_ready.go:82] duration metric: took 6.1918ms for pod "kube-controller-manager-multinode-781100" in "kube-system" namespace to be "Ready" ...
	I0317 12:09:39.171145    9924 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-29tvk" in "kube-system" namespace to be "Ready" ...
	I0317 12:09:39.171145    9924 type.go:168] "Request Body" body=""
	I0317 12:09:39.171145    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29tvk
	I0317 12:09:39.171145    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:39.171145    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:39.171145    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:39.175174    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:39.175661    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:39.175661    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:39.175661    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:39.175661    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:39.175661    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:39.175661    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:39 GMT
	I0317 12:09:39.175661    9924 round_trippers.go:587]     Audit-Id: 4e6eecbe-6341-4014-8ba0-f325ac30269b
	I0317 12:09:39.175953    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  9d 25 0a bf 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 32 39 74 76 6b 12  0b 6b 75 62 65 2d 70 72  |y-29tvk..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 37 65 66  65 36 63 33 32 2d 30 62  |m".*$7efe6c32-0b|
		00000050  39 66 2d 34 64 38 61 2d  38 64 30 38 2d 61 33 39  |9f-4d8a-8d08-a39|
		00000060  39 33 62 36 64 63 35 62  35 32 03 34 30 31 38 00  |93b6dc5b52.4018.|
		00000070  42 08 08 ea a1 e0 be 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  37 62 62 38 34 63 34 39  |n-hash..7bb84c49|
		000000a0  38 34 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |84Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22663 chars]
	 >
	I0317 12:09:39.176223    9924 type.go:168] "Request Body" body=""
	I0317 12:09:39.176312    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:39.176312    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:39.176312    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:39.176312    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:39.179817    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:39.179817    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:39.179817    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:39.179817    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:39.179817    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:39 GMT
	I0317 12:09:39.179817    9924 round_trippers.go:587]     Audit-Id: a7a88b09-8c04-4f5e-ac78-b73a25c183fa
	I0317 12:09:39.179817    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:39.179817    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:39.179817    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c2 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 32  36 38 00 42 08 08 e2 a1  |e7392.4268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20299 chars]
	 >
	I0317 12:09:39.180370    9924 pod_ready.go:93] pod "kube-proxy-29tvk" in "kube-system" namespace has status "Ready":"True"
	I0317 12:09:39.180370    9924 pod_ready.go:82] duration metric: took 9.2247ms for pod "kube-proxy-29tvk" in "kube-system" namespace to be "Ready" ...
	I0317 12:09:39.180370    9924 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-781100" in "kube-system" namespace to be "Ready" ...
	I0317 12:09:39.180574    9924 type.go:168] "Request Body" body=""
	I0317 12:09:39.341345    9924 request.go:661] Waited for 160.769ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-781100
	I0317 12:09:39.341345    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-781100
	I0317 12:09:39.341345    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:39.341836    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:39.341836    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:39.345505    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:39.345505    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:39.345505    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:39.345505    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:39 GMT
	I0317 12:09:39.345505    9924 round_trippers.go:587]     Audit-Id: 0deecd8e-2105-4e27-80da-6982a1217874
	I0317 12:09:39.345505    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:39.345615    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:39.345615    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:39.345847    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  f6 22 0a 81 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.".....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  37 38 31 31 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |781100....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 32 30 37 33 66 65 62  |ystem".*$2073feb|
		00000050  39 2d 39 35 63 38 2d 34  30 65 34 2d 39 38 62 39  |9-95c8-40e4-98b9|
		00000060  2d 31 39 37 35 39 61 38  64 62 36 65 39 32 03 33  |-19759a8db6e92.3|
		00000070  36 31 38 00 42 08 08 e5  a1 e0 be 06 10 00 5a 1b  |618.B.........Z.|
		00000080  0a 09 63 6f 6d 70 6f 6e  65 6e 74 12 0e 6b 75 62  |..component..kub|
		00000090  65 2d 73 63 68 65 64 75  6c 65 72 5a 15 0a 04 74  |e-schedulerZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 3d 0a 19 6b 75  62 65 72 6e 65 74 65 73  |neb=..kubernetes|
		000000c0  2e 69 6f 2f 63 6f 6e 66  69 67 2e 68 61 73 68 12  |.io/config.hash [truncated 21171 chars]
	 >
	I0317 12:09:39.346131    9924 type.go:168] "Request Body" body=""
	I0317 12:09:39.541360    9924 request.go:661] Waited for 195.2264ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:39.541841    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:09:39.541884    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:39.541884    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:39.541884    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:39.545978    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:39.546055    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:39.546055    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:39.546055    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:39.546055    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:39.546055    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:39 GMT
	I0317 12:09:39.546055    9924 round_trippers.go:587]     Audit-Id: fe140e4b-4cc4-465f-a4bb-39fff10f32c5
	I0317 12:09:39.546055    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:39.546359    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c2 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 32  36 38 00 42 08 08 e2 a1  |e7392.4268.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20299 chars]
	 >
	I0317 12:09:39.546512    9924 pod_ready.go:93] pod "kube-scheduler-multinode-781100" in "kube-system" namespace has status "Ready":"True"
	I0317 12:09:39.546596    9924 pod_ready.go:82] duration metric: took 366.2221ms for pod "kube-scheduler-multinode-781100" in "kube-system" namespace to be "Ready" ...
	I0317 12:09:39.546596    9924 pod_ready.go:39] duration metric: took 2.9267005s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
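	The `pod_ready.go` lines above record a poll-until-Ready loop: each pod is re-fetched until its status is "Ready" or a 6m0s deadline passes, and the per-pod duration is logged. A minimal sketch of that pattern (not minikube's actual implementation; `waitForReady` and the stub check are hypothetical names for illustration):

	```go
	package main

	import (
		"fmt"
		"time"
	)

	// waitForReady polls check until it reports ready or the timeout
	// elapses, mirroring the wait loop recorded in the log above.
	func waitForReady(check func() (bool, error), interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			ok, err := check()
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s", timeout)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		start := time.Now()
		calls := 0
		// Stub check that becomes "Ready" on the third poll, standing in
		// for the GET /api/v1/namespaces/kube-system/pods/... requests.
		err := waitForReady(func() (bool, error) {
			calls++
			return calls >= 3, nil
		}, 10*time.Millisecond, time.Second)
		// Log the duration metric the same way pod_ready.go does.
		fmt.Printf("err=%v calls=%d took=%s\n", err, calls, time.Since(start).Round(time.Millisecond))
	}
	```

	The real loop additionally re-fetches the Node object after each pod check (the paired `GET /api/v1/nodes/multinode-781100` requests above) to detect node-level failures while waiting.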
	I0317 12:09:39.546596    9924 api_server.go:52] waiting for apiserver process to appear ...
	I0317 12:09:39.558890    9924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 12:09:39.605628    9924 command_runner.go:130] > 2108
	I0317 12:09:39.605628    9924 api_server.go:72] duration metric: took 24.7726553s to wait for apiserver process to appear ...
	I0317 12:09:39.605628    9924 api_server.go:88] waiting for apiserver healthz status ...
	I0317 12:09:39.605628    9924 api_server.go:253] Checking apiserver healthz at https://172.25.16.124:8443/healthz ...
	I0317 12:09:39.618049    9924 api_server.go:279] https://172.25.16.124:8443/healthz returned 200:
	ok
	I0317 12:09:39.618129    9924 discovery_client.go:658] "Request Body" body=""
	I0317 12:09:39.618129    9924 round_trippers.go:470] GET https://172.25.16.124:8443/version
	I0317 12:09:39.618129    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:39.618129    9924 round_trippers.go:480]     Accept: application/json, */*
	I0317 12:09:39.618129    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:39.621117    9924 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 12:09:39.621195    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:39.621195    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:39 GMT
	I0317 12:09:39.621195    9924 round_trippers.go:587]     Audit-Id: f72042dc-47f5-4111-b3b8-c5a43aca00d6
	I0317 12:09:39.621195    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:39.621195    9924 round_trippers.go:587]     Content-Type: application/json
	I0317 12:09:39.621195    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:39.621300    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:39.621300    9924 round_trippers.go:587]     Content-Length: 263
	I0317 12:09:39.621333    9924 discovery_client.go:658] "Response Body" body=<
		{
		  "major": "1",
		  "minor": "32",
		  "gitVersion": "v1.32.2",
		  "gitCommit": "67a30c0adcf52bd3f56ff0893ce19966be12991f",
		  "gitTreeState": "clean",
		  "buildDate": "2025-02-12T21:19:47Z",
		  "goVersion": "go1.23.6",
		  "compiler": "gc",
		  "platform": "linux/amd64"
		}
	 >
	I0317 12:09:39.621522    9924 api_server.go:141] control plane version: v1.32.2
	I0317 12:09:39.621564    9924 api_server.go:131] duration metric: took 15.9364ms to wait for apiserver health ...
	I0317 12:09:39.621564    9924 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 12:09:39.621711    9924 type.go:204] "Request Body" body=""
	I0317 12:09:39.740750    9924 request.go:661] Waited for 119.0371ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods
	I0317 12:09:39.741462    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods
	I0317 12:09:39.741462    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:39.741530    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:39.741530    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:39.744665    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:09:39.745586    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:39.745586    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:39.745586    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:39.745586    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:39.745586    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:39.745586    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:39 GMT
	I0317 12:09:39.745586    9924 round_trippers.go:587]     Audit-Id: b97684e7-b738-4beb-a6f7-11a55472b03e
	I0317 12:09:39.748032    9924 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 fa c6 02 0a  09 0a 00 12 03 34 35 33  |ist..........453|
		00000020  1a 00 12 d0 27 0a ae 19  0a 18 63 6f 72 65 64 6e  |....'.....coredn|
		00000030  73 2d 36 36 38 64 36 62  66 39 62 63 2d 62 38 34  |s-668d6bf9bc-b84|
		00000040  34 35 12 13 63 6f 72 65  64 6e 73 2d 36 36 38 64  |45..coredns-668d|
		00000050  36 62 66 39 62 63 2d 1a  0b 6b 75 62 65 2d 73 79  |6bf9bc-..kube-sy|
		00000060  73 74 65 6d 22 00 2a 24  31 65 66 61 30 64 62 30  |stem".*$1efa0db0|
		00000070  2d 31 33 36 61 2d 34 34  30 35 2d 38 35 65 31 2d  |-136a-4405-85e1-|
		00000080  34 64 32 61 62 63 38 39  62 36 61 31 32 03 34 34  |4d2abc89b6a12.44|
		00000090  38 38 00 42 08 08 ea a1  e0 be 06 10 00 5a 13 0a  |88.B.........Z..|
		000000a0  07 6b 38 73 2d 61 70 70  12 08 6b 75 62 65 2d 64  |.k8s-app..kube-d|
		000000b0  6e 73 5a 1f 0a 11 70 6f  64 2d 74 65 6d 70 6c 61  |nsZ...pod-templa|
		000000c0  74 65 2d 68 61 73 68 12  0a 36 36 38 64 36 62 66  |te-hash..668d6b [truncated 205787 chars]
	 >
	I0317 12:09:39.748598    9924 system_pods.go:59] 8 kube-system pods found
	I0317 12:09:39.748815    9924 system_pods.go:61] "coredns-668d6bf9bc-b8445" [1efa0db0-136a-4405-85e1-4d2abc89b6a1] Running
	I0317 12:09:39.748815    9924 system_pods.go:61] "etcd-multinode-781100" [c20c91a3-eb4f-46bf-88a8-2ebbfb8ad53d] Running
	I0317 12:09:39.748815    9924 system_pods.go:61] "kindnet-8pd8m" [8cb37df0-2af4-439e-b5fb-03e1fea13790] Running
	I0317 12:09:39.748815    9924 system_pods.go:61] "kube-apiserver-multinode-781100" [3935d5d1-b681-49ec-9801-f940f34820e1] Running
	I0317 12:09:39.748815    9924 system_pods.go:61] "kube-controller-manager-multinode-781100" [6188ed0f-a252-4a59-9ba4-27b02374c4c1] Running
	I0317 12:09:39.748815    9924 system_pods.go:61] "kube-proxy-29tvk" [7efe6c32-0b9f-4d8a-8d08-a3993b6dc5b5] Running
	I0317 12:09:39.748815    9924 system_pods.go:61] "kube-scheduler-multinode-781100" [2073feb9-95c8-40e4-98b9-19759a8db6e9] Running
	I0317 12:09:39.748815    9924 system_pods.go:61] "storage-provisioner" [5cca6e8c-142b-4780-b05b-dd5a84bd4220] Running
	I0317 12:09:39.748815    9924 system_pods.go:74] duration metric: took 127.2207ms to wait for pod list to return data ...
	I0317 12:09:39.748887    9924 default_sa.go:34] waiting for default service account to be created ...
	I0317 12:09:39.748957    9924 type.go:204] "Request Body" body=""
	I0317 12:09:39.941195    9924 request.go:661] Waited for 192.2362ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.124:8443/api/v1/namespaces/default/serviceaccounts
	I0317 12:09:39.941838    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/default/serviceaccounts
	I0317 12:09:39.941897    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:39.941897    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:39.941897    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:39.945949    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:09:39.945949    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:39.946095    9924 round_trippers.go:587]     Content-Length: 128
	I0317 12:09:39.946095    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:39 GMT
	I0317 12:09:39.946251    9924 round_trippers.go:587]     Audit-Id: 245c18f1-f1ca-412b-92cf-87f4dec0fe45
	I0317 12:09:39.946251    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:39.946251    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:39.946251    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:39.946251    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:39.946332    9924 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 18 0a 02  76 31 12 12 53 65 72 76  |k8s.....v1..Serv|
		00000010  69 63 65 41 63 63 6f 75  6e 74 4c 69 73 74 12 5c  |iceAccountList.\|
		00000020  0a 09 0a 00 12 03 34 35  33 1a 00 12 4f 0a 4d 0a  |......453...O.M.|
		00000030  07 64 65 66 61 75 6c 74  12 00 1a 07 64 65 66 61  |.default....defa|
		00000040  75 6c 74 22 00 2a 24 63  62 61 31 33 61 36 30 2d  |ult".*$cba13a60-|
		00000050  33 62 63 61 2d 34 34 37  31 2d 61 64 62 36 2d 35  |3bca-4471-adb6-5|
		00000060  66 36 30 37 33 39 38 36  32 61 31 32 03 33 34 34  |f60739862a12.344|
		00000070  38 00 42 08 08 ea a1 e0  be 06 10 00 1a 00 22 00  |8.B...........".|
	 >
	I0317 12:09:39.946524    9924 default_sa.go:45] found service account: "default"
	I0317 12:09:39.946524    9924 default_sa.go:55] duration metric: took 197.6348ms for default service account to be created ...
	I0317 12:09:39.946577    9924 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 12:09:39.946666    9924 type.go:204] "Request Body" body=""
	I0317 12:09:40.141082    9924 request.go:661] Waited for 194.4133ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods
	I0317 12:09:40.141654    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods
	I0317 12:09:40.141726    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:40.141726    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:40.141726    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:40.147059    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:09:40.147059    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:40.147059    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:40 GMT
	I0317 12:09:40.147059    9924 round_trippers.go:587]     Audit-Id: 6f486540-c795-49c7-8711-3313f64a55a4
	I0317 12:09:40.147059    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:40.147059    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:40.147059    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:40.147059    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:40.149073    9924 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 fa c6 02 0a  09 0a 00 12 03 34 35 34  |ist..........454|
		00000020  1a 00 12 d0 27 0a ae 19  0a 18 63 6f 72 65 64 6e  |....'.....coredn|
		00000030  73 2d 36 36 38 64 36 62  66 39 62 63 2d 62 38 34  |s-668d6bf9bc-b84|
		00000040  34 35 12 13 63 6f 72 65  64 6e 73 2d 36 36 38 64  |45..coredns-668d|
		00000050  36 62 66 39 62 63 2d 1a  0b 6b 75 62 65 2d 73 79  |6bf9bc-..kube-sy|
		00000060  73 74 65 6d 22 00 2a 24  31 65 66 61 30 64 62 30  |stem".*$1efa0db0|
		00000070  2d 31 33 36 61 2d 34 34  30 35 2d 38 35 65 31 2d  |-136a-4405-85e1-|
		00000080  34 64 32 61 62 63 38 39  62 36 61 31 32 03 34 34  |4d2abc89b6a12.44|
		00000090  38 38 00 42 08 08 ea a1  e0 be 06 10 00 5a 13 0a  |88.B.........Z..|
		000000a0  07 6b 38 73 2d 61 70 70  12 08 6b 75 62 65 2d 64  |.k8s-app..kube-d|
		000000b0  6e 73 5a 1f 0a 11 70 6f  64 2d 74 65 6d 70 6c 61  |nsZ...pod-templa|
		000000c0  74 65 2d 68 61 73 68 12  0a 36 36 38 64 36 62 66  |te-hash..668d6b [truncated 205787 chars]
	 >
	I0317 12:09:40.150265    9924 system_pods.go:86] 8 kube-system pods found
	I0317 12:09:40.150265    9924 system_pods.go:89] "coredns-668d6bf9bc-b8445" [1efa0db0-136a-4405-85e1-4d2abc89b6a1] Running
	I0317 12:09:40.150339    9924 system_pods.go:89] "etcd-multinode-781100" [c20c91a3-eb4f-46bf-88a8-2ebbfb8ad53d] Running
	I0317 12:09:40.150339    9924 system_pods.go:89] "kindnet-8pd8m" [8cb37df0-2af4-439e-b5fb-03e1fea13790] Running
	I0317 12:09:40.150339    9924 system_pods.go:89] "kube-apiserver-multinode-781100" [3935d5d1-b681-49ec-9801-f940f34820e1] Running
	I0317 12:09:40.150339    9924 system_pods.go:89] "kube-controller-manager-multinode-781100" [6188ed0f-a252-4a59-9ba4-27b02374c4c1] Running
	I0317 12:09:40.150339    9924 system_pods.go:89] "kube-proxy-29tvk" [7efe6c32-0b9f-4d8a-8d08-a3993b6dc5b5] Running
	I0317 12:09:40.150339    9924 system_pods.go:89] "kube-scheduler-multinode-781100" [2073feb9-95c8-40e4-98b9-19759a8db6e9] Running
	I0317 12:09:40.150339    9924 system_pods.go:89] "storage-provisioner" [5cca6e8c-142b-4780-b05b-dd5a84bd4220] Running
	I0317 12:09:40.150339    9924 system_pods.go:126] duration metric: took 203.7451ms to wait for k8s-apps to be running ...
	I0317 12:09:40.150437    9924 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 12:09:40.160674    9924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 12:09:40.187929    9924 system_svc.go:56] duration metric: took 37.4923ms WaitForService to wait for kubelet
	I0317 12:09:40.187929    9924 kubeadm.go:582] duration metric: took 25.3549508s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 12:09:40.187929    9924 node_conditions.go:102] verifying NodePressure condition ...
	I0317 12:09:40.187929    9924 type.go:204] "Request Body" body=""
	I0317 12:09:40.341285    9924 request.go:661] Waited for 153.3543ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.124:8443/api/v1/nodes
	I0317 12:09:40.341285    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes
	I0317 12:09:40.341894    9924 round_trippers.go:476] Request Headers:
	I0317 12:09:40.341894    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:09:40.341894    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:09:40.357628    9924 round_trippers.go:581] Response Status: 200 OK in 15 milliseconds
	I0317 12:09:40.357729    9924 round_trippers.go:584] Response Headers:
	I0317 12:09:40.357729    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:09:40 GMT
	I0317 12:09:40.357729    9924 round_trippers.go:587]     Audit-Id: 5d59c86f-3ea2-4003-b078-ad98cd53294f
	I0317 12:09:40.357729    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:09:40.357729    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:09:40.357729    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:09:40.357729    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:09:40.358042    9924 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 e5 22 0a  09 0a 00 12 03 34 35 35  |List.."......455|
		00000020  1a 00 12 d7 22 0a 8a 11  0a 10 6d 75 6c 74 69 6e  |....".....multin|
		00000030  6f 64 65 2d 37 38 31 31  30 30 12 00 1a 00 22 00  |ode-781100....".|
		00000040  2a 24 61 61 65 38 30 62  63 35 2d 34 33 30 37 2d  |*$aae80bc5-4307-|
		00000050  34 31 31 37 2d 39 37 35  37 2d 35 32 62 61 38 31  |4117-9757-52ba81|
		00000060  65 64 65 37 33 39 32 03  34 35 34 38 00 42 08 08  |ede7392.4548.B..|
		00000070  e2 a1 e0 be 06 10 00 5a  20 0a 17 62 65 74 61 2e  |.......Z ..beta.|
		00000080  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		00000090  63 68 12 05 61 6d 64 36  34 5a 1e 0a 15 62 65 74  |ch..amd64Z...bet|
		000000a0  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		000000b0  6f 73 12 05 6c 69 6e 75  78 5a 1b 0a 12 6b 75 62  |os..linuxZ...kub|
		000000c0  65 72 6e 65 74 65 73 2e  69 6f 2f 61 72 63 68 12  |ernetes.io/arch [truncated 21096 chars]
	 >
	I0317 12:09:40.358324    9924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 12:09:40.358389    9924 node_conditions.go:123] node cpu capacity is 2
	I0317 12:09:40.358389    9924 node_conditions.go:105] duration metric: took 170.4578ms to run NodePressure ...
	I0317 12:09:40.358470    9924 start.go:241] waiting for startup goroutines ...
	I0317 12:09:40.358492    9924 start.go:246] waiting for cluster config update ...
	I0317 12:09:40.358492    9924 start.go:255] writing updated cluster config ...
	I0317 12:09:40.363779    9924 out.go:201] 
	I0317 12:09:40.367024    9924 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 12:09:40.378583    9924 config.go:182] Loaded profile config "multinode-781100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 12:09:40.379407    9924 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\config.json ...
	I0317 12:09:40.386109    9924 out.go:177] * Starting "multinode-781100-m02" worker node in "multinode-781100" cluster
	I0317 12:09:40.388593    9924 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 12:09:40.389152    9924 cache.go:56] Caching tarball of preloaded images
	I0317 12:09:40.389763    9924 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 12:09:40.389878    9924 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0317 12:09:40.389989    9924 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\config.json ...
	I0317 12:09:40.398701    9924 start.go:360] acquireMachinesLock for multinode-781100-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 12:09:40.399713    9924 start.go:364] duration metric: took 1.0124ms to acquireMachinesLock for "multinode-781100-m02"
	I0317 12:09:40.399935    9924 start.go:93] Provisioning new machine with config: &{Name:multinode-781100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 Clus
terName:multinode-781100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.124 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0317 12:09:40.399983    9924 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0317 12:09:40.402282    9924 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0317 12:09:40.403061    9924 start.go:159] libmachine.API.Create for "multinode-781100" (driver="hyperv")
	I0317 12:09:40.403125    9924 client.go:168] LocalClient.Create starting
	I0317 12:09:40.403180    9924 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0317 12:09:40.403898    9924 main.go:141] libmachine: Decoding PEM data...
	I0317 12:09:40.403898    9924 main.go:141] libmachine: Parsing certificate...
	I0317 12:09:40.404386    9924 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0317 12:09:40.404646    9924 main.go:141] libmachine: Decoding PEM data...
	I0317 12:09:40.404733    9924 main.go:141] libmachine: Parsing certificate...
	I0317 12:09:40.404810    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0317 12:09:42.340871    9924 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0317 12:09:42.340871    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:09:42.340871    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0317 12:09:44.078261    9924 main.go:141] libmachine: [stdout =====>] : False
	
	I0317 12:09:44.078261    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:09:44.078397    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 12:09:45.551396    9924 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 12:09:45.552302    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:09:45.552302    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 12:09:49.341707    9924 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 12:09:49.341707    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:09:49.343785    9924 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0317 12:09:49.872867    9924 main.go:141] libmachine: Creating SSH key...
	I0317 12:09:49.979165    9924 main.go:141] libmachine: Creating VM...
	I0317 12:09:49.979165    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 12:09:52.942855    9924 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 12:09:52.943117    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:09:52.943117    9924 main.go:141] libmachine: Using switch "Default Switch"
	I0317 12:09:52.943117    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 12:09:54.750683    9924 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 12:09:54.750683    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:09:54.750683    9924 main.go:141] libmachine: Creating VHD
	I0317 12:09:54.751751    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0317 12:09:58.626263    9924 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F04FA53C-4B89-43E9-9809-F33C4052B5C6
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0317 12:09:58.626346    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:09:58.626346    9924 main.go:141] libmachine: Writing magic tar header
	I0317 12:09:58.626455    9924 main.go:141] libmachine: Writing SSH key tar header
	I0317 12:09:58.638093    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0317 12:10:01.852293    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:10:01.852293    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:01.852515    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100-m02\disk.vhd' -SizeBytes 20000MB
	I0317 12:10:04.457155    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:10:04.457303    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:04.457355    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-781100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0317 12:10:08.233501    9924 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-781100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0317 12:10:08.233830    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:08.233881    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-781100-m02 -DynamicMemoryEnabled $false
	I0317 12:10:10.625116    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:10:10.625445    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:10.625445    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-781100-m02 -Count 2
	I0317 12:10:12.889208    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:10:12.889208    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:12.889928    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-781100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100-m02\boot2docker.iso'
	I0317 12:10:15.557651    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:10:15.558587    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:15.558861    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-781100-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100-m02\disk.vhd'
	I0317 12:10:18.331322    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:10:18.331322    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:18.331322    9924 main.go:141] libmachine: Starting VM...
	I0317 12:10:18.331322    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-781100-m02
	I0317 12:10:21.583554    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:10:21.584156    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:21.584195    9924 main.go:141] libmachine: Waiting for host to start...
	I0317 12:10:21.584222    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:10:23.897760    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:10:23.898843    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:23.899067    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:10:26.482937    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:10:26.482937    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:27.483090    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:10:29.780688    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:10:29.780688    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:29.781412    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:10:32.323927    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:10:32.323927    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:33.324922    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:10:35.507749    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:10:35.507749    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:35.508837    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:10:38.161616    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:10:38.161616    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:39.162675    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:10:41.456835    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:10:41.456835    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:41.456835    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:10:44.022696    9924 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:10:44.022696    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:45.023542    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:10:47.281070    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:10:47.281070    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:47.281796    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:10:49.940186    9924 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:10:49.940186    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:49.941203    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:10:52.140145    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:10:52.141153    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:52.141153    9924 machine.go:93] provisionDockerMachine start ...
	I0317 12:10:52.141153    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:10:54.323432    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:10:54.323432    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:54.323536    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:10:56.867581    9924 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:10:56.867581    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:56.874095    9924 main.go:141] libmachine: Using SSH client type: native
	I0317 12:10:56.886972    9924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.25.119 22 <nil> <nil>}
	I0317 12:10:56.886972    9924 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 12:10:57.013203    9924 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0317 12:10:57.013277    9924 buildroot.go:166] provisioning hostname "multinode-781100-m02"
	I0317 12:10:57.013277    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:10:59.198206    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:10:59.198206    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:10:59.199085    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:11:01.793848    9924 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:11:01.793848    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:01.799034    9924 main.go:141] libmachine: Using SSH client type: native
	I0317 12:11:01.799868    9924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.25.119 22 <nil> <nil>}
	I0317 12:11:01.799868    9924 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-781100-m02 && echo "multinode-781100-m02" | sudo tee /etc/hostname
	I0317 12:11:01.964628    9924 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-781100-m02
	
	I0317 12:11:01.964700    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:11:04.131870    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:11:04.131870    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:04.132901    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:11:06.686931    9924 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:11:06.686931    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:06.692795    9924 main.go:141] libmachine: Using SSH client type: native
	I0317 12:11:06.693492    9924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.25.119 22 <nil> <nil>}
	I0317 12:11:06.693492    9924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-781100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-781100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-781100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 12:11:06.840843    9924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 12:11:06.840843    9924 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0317 12:11:06.840843    9924 buildroot.go:174] setting up certificates
	I0317 12:11:06.840843    9924 provision.go:84] configureAuth start
	I0317 12:11:06.840843    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:11:08.984956    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:11:08.985158    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:08.985300    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:11:11.609468    9924 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:11:11.609468    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:11.609563    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:11:13.786496    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:11:13.787407    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:13.787604    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:11:16.402825    9924 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:11:16.402825    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:16.403209    9924 provision.go:143] copyHostCerts
	I0317 12:11:16.403209    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0317 12:11:16.403209    9924 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0317 12:11:16.403209    9924 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0317 12:11:16.404043    9924 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0317 12:11:16.405264    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0317 12:11:16.405477    9924 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0317 12:11:16.405634    9924 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0317 12:11:16.405991    9924 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0317 12:11:16.407058    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0317 12:11:16.407309    9924 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0317 12:11:16.407378    9924 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0317 12:11:16.407757    9924 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0317 12:11:16.408884    9924 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-781100-m02 san=[127.0.0.1 172.25.25.119 localhost minikube multinode-781100-m02]
	I0317 12:11:16.778597    9924 provision.go:177] copyRemoteCerts
	I0317 12:11:16.792431    9924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 12:11:16.792431    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:11:18.949764    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:11:18.950003    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:18.950055    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:11:21.541505    9924 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:11:21.542469    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:21.542886    9924 sshutil.go:53] new ssh client: &{IP:172.25.25.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100-m02\id_rsa Username:docker}
	I0317 12:11:21.657291    9924 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8648104s)
	I0317 12:11:21.657291    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0317 12:11:21.657291    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 12:11:21.703633    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0317 12:11:21.704055    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 12:11:21.747440    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0317 12:11:21.747440    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0317 12:11:21.793992    9924 provision.go:87] duration metric: took 14.952998s to configureAuth
	I0317 12:11:21.793992    9924 buildroot.go:189] setting minikube options for container-runtime
	I0317 12:11:21.794522    9924 config.go:182] Loaded profile config "multinode-781100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 12:11:21.794522    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:11:23.972785    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:11:23.972785    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:23.973502    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:11:26.560660    9924 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:11:26.560660    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:26.569660    9924 main.go:141] libmachine: Using SSH client type: native
	I0317 12:11:26.570315    9924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.25.119 22 <nil> <nil>}
	I0317 12:11:26.570315    9924 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0317 12:11:26.701517    9924 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0317 12:11:26.701517    9924 buildroot.go:70] root file system type: tmpfs
	I0317 12:11:26.701766    9924 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0317 12:11:26.701847    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:11:28.875938    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:11:28.875938    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:28.876119    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:11:31.480272    9924 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:11:31.481096    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:31.487079    9924 main.go:141] libmachine: Using SSH client type: native
	I0317 12:11:31.487849    9924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.25.119 22 <nil> <nil>}
	I0317 12:11:31.487849    9924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.25.16.124"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0317 12:11:31.650628    9924 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.25.16.124
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
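The unit file echoed above relies on the drop-in pattern its own comments describe: the first `ExecStart=` is empty to clear any inherited command, and the second sets the real one. A minimal check of that shape (temp file stands in for the unit path):

```shell
# Verify the clear-then-set ExecStart pattern in a sample unit fragment.
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
execs=$(grep -c '^ExecStart=' "$unit")       # expect exactly two lines
first=$(grep '^ExecStart=' "$unit" | head -1) # expect the empty (clearing) one first
```

Without the empty first line, systemd rejects the unit with "Service has more than one ExecStart= setting", exactly as the comment warns.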
	I0317 12:11:31.650628    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:11:33.846381    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:11:33.847200    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:33.847200    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:11:36.458457    9924 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:11:36.458867    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:36.464455    9924 main.go:141] libmachine: Using SSH client type: native
	I0317 12:11:36.465195    9924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.25.119 22 <nil> <nil>}
	I0317 12:11:36.465195    9924 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0317 12:11:38.752069    9924 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
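The `diff ... || { mv ...; }` command above installs the new unit only when it differs from the current one; in this run `diff` fails because the old file does not exist yet, which also triggers the install branch. A sketch of the idiom with the `systemctl` calls omitted so it runs anywhere (temp paths stand in for `/lib/systemd/system`):

```shell
# Install-if-changed: .new replaces the unit only when diff reports a
# difference -- or, as in this log, when the old file is missing entirely.
dir=$(mktemp -d)
printf 'new unit\n' > "$dir/docker.service.new"
diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null 2>&1 || {
  mv "$dir/docker.service.new" "$dir/docker.service"
  # real flow continues: systemctl daemon-reload && systemctl enable docker && systemctl restart docker
}
```

When the files already match, `diff` exits 0 and the restart is skipped, avoiding needless Docker downtime on re-provision.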
	I0317 12:11:38.752150    9924 machine.go:96] duration metric: took 46.6105255s to provisionDockerMachine
	I0317 12:11:38.752150    9924 client.go:171] duration metric: took 1m58.3478298s to LocalClient.Create
	I0317 12:11:38.752258    9924 start.go:167] duration metric: took 1m58.347961s to libmachine.API.Create "multinode-781100"
	I0317 12:11:38.752258    9924 start.go:293] postStartSetup for "multinode-781100-m02" (driver="hyperv")
	I0317 12:11:38.752258    9924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 12:11:38.763515    9924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 12:11:38.763515    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:11:40.980191    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:11:40.980191    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:40.980808    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:11:43.572244    9924 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:11:43.572738    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:43.573033    9924 sshutil.go:53] new ssh client: &{IP:172.25.25.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100-m02\id_rsa Username:docker}
	I0317 12:11:43.686682    9924 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9230417s)
	I0317 12:11:43.700669    9924 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 12:11:43.708345    9924 command_runner.go:130] > NAME=Buildroot
	I0317 12:11:43.708345    9924 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0317 12:11:43.708345    9924 command_runner.go:130] > ID=buildroot
	I0317 12:11:43.708345    9924 command_runner.go:130] > VERSION_ID=2023.02.9
	I0317 12:11:43.708345    9924 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0317 12:11:43.708345    9924 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 12:11:43.708472    9924 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0317 12:11:43.708643    9924 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0317 12:11:43.709878    9924 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> 89402.pem in /etc/ssl/certs
	I0317 12:11:43.709878    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> /etc/ssl/certs/89402.pem
	I0317 12:11:43.721366    9924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 12:11:43.739116    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /etc/ssl/certs/89402.pem (1708 bytes)
	I0317 12:11:43.786495    9924 start.go:296] duration metric: took 5.034186s for postStartSetup
	I0317 12:11:43.789938    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:11:45.960214    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:11:45.960214    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:45.960585    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:11:48.553939    9924 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:11:48.553939    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:48.554989    9924 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\config.json ...
	I0317 12:11:48.557434    9924 start.go:128] duration metric: took 2m8.1561568s to createHost
	I0317 12:11:48.557434    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:11:50.704560    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:11:50.704560    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:50.704560    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:11:53.255034    9924 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:11:53.255034    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:53.262046    9924 main.go:141] libmachine: Using SSH client type: native
	I0317 12:11:53.262817    9924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.25.119 22 <nil> <nil>}
	I0317 12:11:53.262817    9924 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 12:11:53.391769    9924 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742213513.417399506
	
	I0317 12:11:53.391769    9924 fix.go:216] guest clock: 1742213513.417399506
	I0317 12:11:53.391769    9924 fix.go:229] Guest: 2025-03-17 12:11:53.417399506 +0000 UTC Remote: 2025-03-17 12:11:48.5574343 +0000 UTC m=+349.849791901 (delta=4.859965206s)
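The `fix.go` lines above compare the guest's `date +%s.%N` against the host-side reference time and report the skew. A sketch of that delta computation using the two timestamps from this log (values copied from the lines above, not live clocks):

```shell
guest=1742213513.417399506   # guest `date +%s.%N` from the SSH command above
remote=1742213508.557434300  # host-side reference timestamp
# awk handles the fractional subtraction; print to 3 decimals since
# double precision cannot carry all nine nanosecond digits at this magnitude
delta=$(awk -v g="$guest" -v r="$remote" 'BEGIN { printf "%.3f", g - r }')
echo "$delta"
```

The subsequent fix truncates to whole seconds, hence `sudo date -s @1742213513` in the next SSH command.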
	I0317 12:11:53.391940    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:11:55.510201    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:11:55.510619    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:55.510711    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:11:58.098550    9924 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:11:58.098762    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:11:58.104421    9924 main.go:141] libmachine: Using SSH client type: native
	I0317 12:11:58.105130    9924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.25.119 22 <nil> <nil>}
	I0317 12:11:58.105130    9924 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1742213513
	I0317 12:11:58.250656    9924 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 17 12:11:53 UTC 2025
	
	I0317 12:11:58.250656    9924 fix.go:236] clock set: Mon Mar 17 12:11:53 UTC 2025
	 (err=<nil>)
	I0317 12:11:58.250656    9924 start.go:83] releasing machines lock for "multinode-781100-m02", held for 2m17.8494763s
	I0317 12:11:58.250656    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:12:00.441709    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:12:00.442510    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:12:00.442623    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:12:03.157950    9924 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:12:03.157950    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:12:03.161218    9924 out.go:177] * Found network options:
	I0317 12:12:03.164599    9924 out.go:177]   - NO_PROXY=172.25.16.124
	W0317 12:12:03.167203    9924 proxy.go:119] fail to check proxy env: Error ip not in block
	I0317 12:12:03.169613    9924 out.go:177]   - NO_PROXY=172.25.16.124
	W0317 12:12:03.171658    9924 proxy.go:119] fail to check proxy env: Error ip not in block
	W0317 12:12:03.173900    9924 proxy.go:119] fail to check proxy env: Error ip not in block
	I0317 12:12:03.175904    9924 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0317 12:12:03.175904    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:12:03.184951    9924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 12:12:03.184951    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:12:05.452598    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:12:05.452598    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:12:05.452598    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:12:05.499878    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:12:05.500901    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:12:05.500901    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:12:08.198594    9924 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:12:08.198594    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:12:08.198910    9924 sshutil.go:53] new ssh client: &{IP:172.25.25.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100-m02\id_rsa Username:docker}
	I0317 12:12:08.224403    9924 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:12:08.224403    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:12:08.225416    9924 sshutil.go:53] new ssh client: &{IP:172.25.25.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100-m02\id_rsa Username:docker}
	I0317 12:12:08.303664    9924 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0317 12:12:08.303727    9924 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1277707s)
	W0317 12:12:08.303727    9924 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0317 12:12:08.321834    9924 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0317 12:12:08.321834    9924 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1368314s)
	W0317 12:12:08.321834    9924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 12:12:08.334038    9924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 12:12:08.365449    9924 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0317 12:12:08.365689    9924 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 12:12:08.365689    9924 start.go:495] detecting cgroup driver to use...
	I0317 12:12:08.365952    9924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 12:12:08.400604    9924 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0317 12:12:08.412598    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 12:12:08.443181    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 12:12:08.462625    9924 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 12:12:08.475219    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W0317 12:12:08.475219    9924 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0317 12:12:08.475219    9924 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0317 12:12:08.506591    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 12:12:08.542995    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 12:12:08.575004    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 12:12:08.605576    9924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 12:12:08.637966    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 12:12:08.675388    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 12:12:08.708290    9924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
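The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place, preserving each line's indentation via the captured group. One of them applied to a tiny sample file (temp path stands in for the real config; GNU sed assumed):

```shell
# The SystemdCgroup toggle from the log, run against a sample fragment.
cfg=$(mktemp)
printf '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```

The `( *)` capture keeps the TOML nesting intact, which matters because containerd's parser is whitespace-sensitive only in the sense that the key must stay under its table header.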
	I0317 12:12:08.741142    9924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 12:12:08.758732    9924 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 12:12:08.759615    9924 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 12:12:08.772999    9924 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 12:12:08.810641    9924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 12:12:08.846169    9924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:12:09.049849    9924 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 12:12:09.086211    9924 start.go:495] detecting cgroup driver to use...
	I0317 12:12:09.099955    9924 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0317 12:12:09.130518    9924 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0317 12:12:09.130584    9924 command_runner.go:130] > [Unit]
	I0317 12:12:09.130584    9924 command_runner.go:130] > Description=Docker Application Container Engine
	I0317 12:12:09.130584    9924 command_runner.go:130] > Documentation=https://docs.docker.com
	I0317 12:12:09.130584    9924 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0317 12:12:09.130584    9924 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0317 12:12:09.130662    9924 command_runner.go:130] > StartLimitBurst=3
	I0317 12:12:09.130662    9924 command_runner.go:130] > StartLimitIntervalSec=60
	I0317 12:12:09.130662    9924 command_runner.go:130] > [Service]
	I0317 12:12:09.130662    9924 command_runner.go:130] > Type=notify
	I0317 12:12:09.130662    9924 command_runner.go:130] > Restart=on-failure
	I0317 12:12:09.130662    9924 command_runner.go:130] > Environment=NO_PROXY=172.25.16.124
	I0317 12:12:09.130662    9924 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0317 12:12:09.130662    9924 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0317 12:12:09.130662    9924 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0317 12:12:09.130662    9924 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0317 12:12:09.130662    9924 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0317 12:12:09.130662    9924 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0317 12:12:09.130662    9924 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0317 12:12:09.130662    9924 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0317 12:12:09.130662    9924 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0317 12:12:09.130662    9924 command_runner.go:130] > ExecStart=
	I0317 12:12:09.130662    9924 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0317 12:12:09.130662    9924 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0317 12:12:09.130662    9924 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0317 12:12:09.130662    9924 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0317 12:12:09.130662    9924 command_runner.go:130] > LimitNOFILE=infinity
	I0317 12:12:09.130662    9924 command_runner.go:130] > LimitNPROC=infinity
	I0317 12:12:09.130662    9924 command_runner.go:130] > LimitCORE=infinity
	I0317 12:12:09.130662    9924 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0317 12:12:09.130662    9924 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0317 12:12:09.130662    9924 command_runner.go:130] > TasksMax=infinity
	I0317 12:12:09.130662    9924 command_runner.go:130] > TimeoutStartSec=0
	I0317 12:12:09.130662    9924 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0317 12:12:09.130662    9924 command_runner.go:130] > Delegate=yes
	I0317 12:12:09.130662    9924 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0317 12:12:09.130662    9924 command_runner.go:130] > KillMode=process
	I0317 12:12:09.130662    9924 command_runner.go:130] > [Install]
	I0317 12:12:09.130662    9924 command_runner.go:130] > WantedBy=multi-user.target
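The docker.service unit dumped above starts with an empty `ExecStart=` to clear the command inherited from the base configuration, as its own comments explain; systemd rejects multiple non-empty `ExecStart=` lines for non-oneshot services. A minimal sketch of checking a unit fragment for that reset-then-set pattern (the unit text here is a trimmed stand-in, not the full file above):

```shell
# Count the blank reset line vs. real command lines in a stand-in drop-in.
unit='[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock'
resets=$(printf '%s\n' "$unit" | grep -c '^ExecStart=$')
cmds=$(printf '%s\n' "$unit" | grep -c '^ExecStart=.')
echo "resets=$resets cmds=$cmds"
```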
	I0317 12:12:09.144161    9924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 12:12:09.180708    9924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 12:12:09.231764    9924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 12:12:09.270947    9924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 12:12:09.309222    9924 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 12:12:09.375185    9924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 12:12:09.399420    9924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 12:12:09.434022    9924 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
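The step above points `crictl` at the cri-dockerd socket by writing a one-line `/etc/crictl.yaml`. A sketch of the same write against a temp directory (the real run uses `sudo tee` into `/etc`, as logged):

```shell
# Write the one-line crictl config shown in the log into a scratch dir.
tmp=$(mktemp -d)
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' > "$tmp/crictl.yaml"
cat "$tmp/crictl.yaml"
```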
	I0317 12:12:09.445497    9924 ssh_runner.go:195] Run: which cri-dockerd
	I0317 12:12:09.452130    9924 command_runner.go:130] > /usr/bin/cri-dockerd
	I0317 12:12:09.463130    9924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0317 12:12:09.480919    9924 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0317 12:12:09.526048    9924 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0317 12:12:09.738837    9924 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0317 12:12:09.937249    9924 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0317 12:12:09.937397    9924 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0317 12:12:09.987811    9924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:12:10.178464    9924 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 12:12:12.780981    9924 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6024901s)
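The "configuring docker to use cgroupfs" step above scp's a 130-byte `/etc/docker/daemon.json` before the restart. The log does not show the payload, so the JSON below is an assumption about its shape, written to a temp dir for illustration:

```shell
# Hypothetical daemon.json matching the "cgroupfs" line in the log; the exact
# payload is not shown there, so this content is an assumption.
tmp=$(mktemp -d)
cat > "$tmp/daemon.json" <<'EOF'
{"exec-opts": ["native.cgroupdriver=cgroupfs"]}
EOF
cat "$tmp/daemon.json"
```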
	I0317 12:12:12.796098    9924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0317 12:12:12.831564    9924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 12:12:12.866608    9924 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0317 12:12:13.064920    9924 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0317 12:12:13.270591    9924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:12:13.479027    9924 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0317 12:12:13.522831    9924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 12:12:13.559382    9924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:12:13.745900    9924 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0317 12:12:13.855577    9924 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0317 12:12:13.868881    9924 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0317 12:12:13.880466    9924 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0317 12:12:13.880466    9924 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0317 12:12:13.880466    9924 command_runner.go:130] > Device: 0,22	Inode: 876         Links: 1
	I0317 12:12:13.880533    9924 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0317 12:12:13.880533    9924 command_runner.go:130] > Access: 2025-03-17 12:12:13.798433304 +0000
	I0317 12:12:13.880533    9924 command_runner.go:130] > Modify: 2025-03-17 12:12:13.798433304 +0000
	I0317 12:12:13.880533    9924 command_runner.go:130] > Change: 2025-03-17 12:12:13.802433321 +0000
	I0317 12:12:13.880533    9924 command_runner.go:130] >  Birth: -
	I0317 12:12:13.881169    9924 start.go:563] Will wait 60s for crictl version
	I0317 12:12:13.894461    9924 ssh_runner.go:195] Run: which crictl
	I0317 12:12:13.900710    9924 command_runner.go:130] > /usr/bin/crictl
	I0317 12:12:13.911924    9924 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 12:12:13.964660    9924 command_runner.go:130] > Version:  0.1.0
	I0317 12:12:13.964797    9924 command_runner.go:130] > RuntimeName:  docker
	I0317 12:12:13.964797    9924 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0317 12:12:13.964797    9924 command_runner.go:130] > RuntimeApiVersion:  v1
	I0317 12:12:13.964797    9924 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0317 12:12:13.973773    9924 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 12:12:14.008786    9924 command_runner.go:130] > 27.4.0
	I0317 12:12:14.018942    9924 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 12:12:14.060502    9924 command_runner.go:130] > 27.4.0
	I0317 12:12:14.065447    9924 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0317 12:12:14.068316    9924 out.go:177]   - env NO_PROXY=172.25.16.124
	I0317 12:12:14.071234    9924 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0317 12:12:14.075760    9924 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0317 12:12:14.075760    9924 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0317 12:12:14.075760    9924 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0317 12:12:14.075760    9924 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4b:84:d5 Flags:up|broadcast|multicast|running}
	I0317 12:12:14.079226    9924 ip.go:214] interface addr: fe80::f0c7:c31c:6237:ef35/64
	I0317 12:12:14.079291    9924 ip.go:214] interface addr: 172.25.16.1/20
	I0317 12:12:14.093567    9924 ssh_runner.go:195] Run: grep 172.25.16.1	host.minikube.internal$ /etc/hosts
	I0317 12:12:14.099362    9924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 12:12:14.119216    9924 mustload.go:65] Loading cluster: multinode-781100
	I0317 12:12:14.120057    9924 config.go:182] Loaded profile config "multinode-781100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 12:12:14.120219    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:12:16.329745    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:12:16.329745    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:12:16.330238    9924 host.go:66] Checking if "multinode-781100" exists ...
	I0317 12:12:16.331200    9924 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100 for IP: 172.25.25.119
	I0317 12:12:16.331278    9924 certs.go:194] generating shared ca certs ...
	I0317 12:12:16.331360    9924 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:12:16.331360    9924 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0317 12:12:16.332429    9924 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0317 12:12:16.332759    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0317 12:12:16.332759    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0317 12:12:16.332759    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0317 12:12:16.333297    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0317 12:12:16.333896    9924 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem (1338 bytes)
	W0317 12:12:16.334244    9924 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940_empty.pem, impossibly tiny 0 bytes
	I0317 12:12:16.334414    9924 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0317 12:12:16.334414    9924 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0317 12:12:16.334414    9924 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0317 12:12:16.335182    9924 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0317 12:12:16.335689    9924 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem (1708 bytes)
	I0317 12:12:16.335953    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:12:16.336097    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem -> /usr/share/ca-certificates/8940.pem
	I0317 12:12:16.336267    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> /usr/share/ca-certificates/89402.pem
	I0317 12:12:16.336267    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 12:12:16.385683    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 12:12:16.434587    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 12:12:16.477940    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0317 12:12:16.523758    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 12:12:16.567911    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem --> /usr/share/ca-certificates/8940.pem (1338 bytes)
	I0317 12:12:16.609788    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /usr/share/ca-certificates/89402.pem (1708 bytes)
	I0317 12:12:16.665937    9924 ssh_runner.go:195] Run: openssl version
	I0317 12:12:16.675273    9924 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0317 12:12:16.685483    9924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 12:12:16.721185    9924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:12:16.729341    9924 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 17 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:12:16.729341    9924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:12:16.740811    9924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:12:16.751467    9924 command_runner.go:130] > b5213941
	I0317 12:12:16.763229    9924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 12:12:16.794541    9924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8940.pem && ln -fs /usr/share/ca-certificates/8940.pem /etc/ssl/certs/8940.pem"
	I0317 12:12:16.824262    9924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8940.pem
	I0317 12:12:16.831486    9924 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 17 10:46 /usr/share/ca-certificates/8940.pem
	I0317 12:12:16.831486    9924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:46 /usr/share/ca-certificates/8940.pem
	I0317 12:12:16.842321    9924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8940.pem
	I0317 12:12:16.850485    9924 command_runner.go:130] > 51391683
	I0317 12:12:16.861041    9924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8940.pem /etc/ssl/certs/51391683.0"
	I0317 12:12:16.891989    9924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/89402.pem && ln -fs /usr/share/ca-certificates/89402.pem /etc/ssl/certs/89402.pem"
	I0317 12:12:16.922863    9924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89402.pem
	I0317 12:12:16.929648    9924 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 17 10:46 /usr/share/ca-certificates/89402.pem
	I0317 12:12:16.929648    9924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:46 /usr/share/ca-certificates/89402.pem
	I0317 12:12:16.943128    9924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89402.pem
	I0317 12:12:16.952788    9924 command_runner.go:130] > 3ec20f2e
	I0317 12:12:16.965291    9924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/89402.pem /etc/ssl/certs/3ec20f2e.0"
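The three `openssl x509 -hash` / `ln -fs` pairs above all follow OpenSSL's hashed CA-directory layout: each trusted cert gets a `<subject-hash>.0` symlink in `/etc/ssl/certs` so the TLS stack can locate it by hash. A sketch with a throwaway self-signed cert in a temp dir (names here are illustrative):

```shell
# Generate a throwaway cert, compute its subject hash, and create the
# <hash>.0 symlink the way the log's ln -fs commands do.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$tmp/ca.key" -out "$tmp/demoCA.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$tmp/demoCA.pem")
ln -fs "$tmp/demoCA.pem" "$tmp/$hash.0"
echo "hash=$hash"
```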
	I0317 12:12:16.996131    9924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 12:12:17.002499    9924 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 12:12:17.003123    9924 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 12:12:17.003330    9924 kubeadm.go:934] updating node {m02 172.25.25.119 8443 v1.32.2 docker false true} ...
	I0317 12:12:17.003584    9924 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-781100-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.25.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:multinode-781100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 12:12:17.014431    9924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 12:12:17.031681    9924 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	I0317 12:12:17.033465    9924 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0317 12:12:17.045846    9924 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0317 12:12:17.063716    9924 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0317 12:12:17.063716    9924 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256
	I0317 12:12:17.063716    9924 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256
	I0317 12:12:17.063716    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0317 12:12:17.063716    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm -> /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0317 12:12:17.079765    9924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 12:12:17.079765    9924 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0317 12:12:17.079765    9924 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0317 12:12:17.110358    9924 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0317 12:12:17.110358    9924 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0317 12:12:17.110358    9924 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0317 12:12:17.110637    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0317 12:12:17.110796    9924 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0317 12:12:17.111089    9924 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0317 12:12:17.111283    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0317 12:12:17.123140    9924 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0317 12:12:17.205552    9924 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0317 12:12:17.205552    9924 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0317 12:12:17.205552    9924 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0317 12:12:18.447389    9924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0317 12:12:18.465308    9924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0317 12:12:18.499016    9924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 12:12:18.543802    9924 ssh_runner.go:195] Run: grep 172.25.16.124	control-plane.minikube.internal$ /etc/hosts
	I0317 12:12:18.550872    9924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.16.124	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
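The `grep -v` / `echo` / `cp` pipeline above is minikube's idempotent hosts-entry update: drop any existing line for the name, append the fresh mapping, then copy the result back. Demonstrated on a scratch file, with the IP and hostname taken from the log:

```shell
# Rewrite a hosts file so control-plane.minikube.internal appears exactly once.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n' > "$hosts"
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '172.25.16.124\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```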
	I0317 12:12:18.591951    9924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:12:18.800665    9924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 12:12:18.834072    9924 host.go:66] Checking if "multinode-781100" exists ...
	I0317 12:12:18.834981    9924 start.go:317] joinCluster: &{Name:multinode-781100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-781100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.124 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.25.119 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:12:18.834981    9924 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0317 12:12:18.834981    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:12:21.065393    9924 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:12:21.065540    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:12:21.065540    9924 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:12:23.754114    9924 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:12:23.754114    9924 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:12:23.754312    9924 sshutil.go:53] new ssh client: &{IP:172.25.16.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\id_rsa Username:docker}
	I0317 12:12:24.229815    9924 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token fzdp83.eryci96rc34tev3l --discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 
	I0317 12:12:24.229874    9924 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.3948385s)
	I0317 12:12:24.229874    9924 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.25.25.119 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0317 12:12:24.229874    9924 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fzdp83.eryci96rc34tev3l --discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-781100-m02"
	I0317 12:12:24.413956    9924 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 12:12:25.751315    9924 command_runner.go:130] > [preflight] Running pre-flight checks
	I0317 12:12:25.751403    9924 command_runner.go:130] > [preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
	I0317 12:12:25.751403    9924 command_runner.go:130] > [preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
	I0317 12:12:25.751403    9924 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 12:12:25.751403    9924 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 12:12:25.751403    9924 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0317 12:12:25.751594    9924 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 12:12:25.751594    9924 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.988613ms
	I0317 12:12:25.751594    9924 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0317 12:12:25.751594    9924 command_runner.go:130] > This node has joined the cluster:
	I0317 12:12:25.751679    9924 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0317 12:12:25.751679    9924 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0317 12:12:25.751679    9924 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0317 12:12:25.751745    9924 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fzdp83.eryci96rc34tev3l --discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-781100-m02": (1.5217904s)
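The control plane's `kubeadm token create --print-join-command` output (logged a few lines up) is executed verbatim on the worker, with `--cri-socket` and `--node-name` appended. A sketch of pulling the token and discovery hash out of such a line, using the exact values from the log:

```shell
# Parse the printed join command into its two credential flags.
join='kubeadm join control-plane.minikube.internal:8443 --token fzdp83.eryci96rc34tev3l --discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256'
set -- $join
token='' cahash=''
while [ $# -gt 0 ]; do
  case "$1" in
    --token) token=$2; shift 2 ;;
    --discovery-token-ca-cert-hash) cahash=$2; shift 2 ;;
    *) shift ;;
  esac
done
echo "token=$token"
```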
	I0317 12:12:25.751745    9924 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0317 12:12:25.981751    9924 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0317 12:12:26.210467    9924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-781100-m02 minikube.k8s.io/updated_at=2025_03_17T12_12_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=multinode-781100 minikube.k8s.io/primary=false
	I0317 12:12:26.353708    9924 command_runner.go:130] > node/multinode-781100-m02 labeled
	I0317 12:12:26.353911    9924 start.go:319] duration metric: took 7.5188096s to joinCluster
	I0317 12:12:26.354105    9924 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.25.25.119 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0317 12:12:26.354368    9924 config.go:182] Loaded profile config "multinode-781100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 12:12:26.357035    9924 out.go:177] * Verifying Kubernetes components...
	I0317 12:12:26.373276    9924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:12:26.608473    9924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 12:12:26.634792    9924 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 12:12:26.636078    9924 kapi.go:59] client config for multinode-781100: &rest.Config{Host:"https://172.25.16.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-781100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-781100\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2e292e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0317 12:12:26.637561    9924 node_ready.go:35] waiting up to 6m0s for node "multinode-781100-m02" to be "Ready" ...
	I0317 12:12:26.637753    9924 type.go:168] "Request Body" body=""
	I0317 12:12:26.637753    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:26.637753    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:26.637753    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:26.637753    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:26.650377    9924 round_trippers.go:581] Response Status: 200 OK in 12 milliseconds
	I0317 12:12:26.650377    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:26.650377    9924 round_trippers.go:587]     Audit-Id: 5e419067-927a-4c38-888f-4d409475d4d8
	I0317 12:12:26.650377    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:26.650377    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:26.650377    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:26.650377    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:26.650496    9924 round_trippers.go:587]     Content-Length: 2719
	I0317 12:12:26.650496    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:26 GMT
	I0317 12:12:26.650677    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 88 15 0a 9c 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 31 35 38 00 42  |1cdec8612.6158.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12405 chars]
	 >
	I0317 12:12:27.138008    9924 type.go:168] "Request Body" body=""
	I0317 12:12:27.138008    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:27.138008    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:27.138008    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:27.138008    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:27.143382    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:12:27.143908    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:27.143908    9924 round_trippers.go:587]     Audit-Id: d85a6122-f1df-4574-baa0-994210e5deb8
	I0317 12:12:27.143908    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:27.143908    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:27.143908    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:27.143908    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:27.143908    9924 round_trippers.go:587]     Content-Length: 2719
	I0317 12:12:27.143908    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:27 GMT
	I0317 12:12:27.144666    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 88 15 0a 9c 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 31 35 38 00 42  |1cdec8612.6158.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12405 chars]
	 >
	I0317 12:12:27.638208    9924 type.go:168] "Request Body" body=""
	I0317 12:12:27.638656    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:27.638759    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:27.638759    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:27.638759    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:27.644422    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:12:27.644508    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:27.644535    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:27.644549    9924 round_trippers.go:587]     Content-Length: 2719
	I0317 12:12:27.644549    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:27 GMT
	I0317 12:12:27.644549    9924 round_trippers.go:587]     Audit-Id: a940b169-fa48-4b70-835c-42cd85538c4b
	I0317 12:12:27.644549    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:27.644549    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:27.644549    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:27.644673    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 88 15 0a 9c 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 31 35 38 00 42  |1cdec8612.6158.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12405 chars]
	 >
	I0317 12:12:28.138615    9924 type.go:168] "Request Body" body=""
	I0317 12:12:28.138615    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:28.138615    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:28.138615    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:28.138615    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:28.142665    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:28.143595    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:28.143595    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:28.143595    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:28.143595    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:28.143595    9924 round_trippers.go:587]     Content-Length: 2719
	I0317 12:12:28.143666    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:28 GMT
	I0317 12:12:28.143666    9924 round_trippers.go:587]     Audit-Id: 16c57de6-6f21-40f2-8ba0-753b3f0fdde0
	I0317 12:12:28.143685    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:28.143918    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 88 15 0a 9c 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 31 35 38 00 42  |1cdec8612.6158.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12405 chars]
	 >
	I0317 12:12:28.637947    9924 type.go:168] "Request Body" body=""
	I0317 12:12:28.637947    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:28.637947    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:28.637947    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:28.637947    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:28.642463    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:28.642463    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:28.642463    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:28.642463    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:28.642463    9924 round_trippers.go:587]     Content-Length: 2719
	I0317 12:12:28.642463    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:28 GMT
	I0317 12:12:28.642463    9924 round_trippers.go:587]     Audit-Id: c45550a1-04aa-49e4-82d2-7ce9351a8f8d
	I0317 12:12:28.642463    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:28.642463    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:28.642463    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 88 15 0a 9c 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 31 35 38 00 42  |1cdec8612.6158.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12405 chars]
	 >
	I0317 12:12:28.643012    9924 node_ready.go:53] node "multinode-781100-m02" has status "Ready":"False"
	I0317 12:12:29.138749    9924 type.go:168] "Request Body" body=""
	I0317 12:12:29.138749    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:29.138749    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:29.138749    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:29.138749    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:29.143707    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:29.143707    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:29.143707    9924 round_trippers.go:587]     Audit-Id: 2c96e5e6-35e5-4a48-9bdf-f8e3da380787
	I0317 12:12:29.143868    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:29.143868    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:29.143925    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:29.143925    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:29.143925    9924 round_trippers.go:587]     Content-Length: 2789
	I0317 12:12:29.143925    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:29 GMT
	I0317 12:12:29.144016    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 32 31 38 00 42  |1cdec8612.6218.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0317 12:12:29.638370    9924 type.go:168] "Request Body" body=""
	I0317 12:12:29.638370    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:29.638370    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:29.638370    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:29.638370    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:29.643401    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:12:29.643500    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:29.643500    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:29.643500    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:29.643562    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:29.643562    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:29.643562    9924 round_trippers.go:587]     Content-Length: 2789
	I0317 12:12:29.643562    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:29 GMT
	I0317 12:12:29.643562    9924 round_trippers.go:587]     Audit-Id: 1c4ab807-f4d5-426d-bf23-d8bd2f87e87e
	I0317 12:12:29.643799    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 32 31 38 00 42  |1cdec8612.6218.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0317 12:12:30.138613    9924 type.go:168] "Request Body" body=""
	I0317 12:12:30.138613    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:30.138613    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:30.138613    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:30.138613    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:30.142903    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:30.142903    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:30.142903    9924 round_trippers.go:587]     Content-Length: 2789
	I0317 12:12:30.142903    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:30 GMT
	I0317 12:12:30.142903    9924 round_trippers.go:587]     Audit-Id: 32f2f514-8f26-4aff-949e-41fb4bdbb631
	I0317 12:12:30.142903    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:30.142903    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:30.142903    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:30.142903    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:30.143177    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 32 31 38 00 42  |1cdec8612.6218.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0317 12:12:30.639228    9924 type.go:168] "Request Body" body=""
	I0317 12:12:30.639327    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:30.639395    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:30.639395    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:30.639395    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:30.643913    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:30.644004    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:30.644004    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:30.644004    9924 round_trippers.go:587]     Content-Length: 2789
	I0317 12:12:30.644004    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:30 GMT
	I0317 12:12:30.644081    9924 round_trippers.go:587]     Audit-Id: 55caabf0-40a4-4c91-bdd6-abb449148bad
	I0317 12:12:30.644081    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:30.644081    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:30.644081    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:30.644230    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 32 31 38 00 42  |1cdec8612.6218.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0317 12:12:30.644230    9924 node_ready.go:53] node "multinode-781100-m02" has status "Ready":"False"
	I0317 12:12:31.139121    9924 type.go:168] "Request Body" body=""
	I0317 12:12:31.139279    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:31.139279    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:31.139370    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:31.139370    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:31.143795    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:31.143795    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:31.143795    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:31.143877    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:31.143877    9924 round_trippers.go:587]     Content-Length: 2789
	I0317 12:12:31.143877    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:31 GMT
	I0317 12:12:31.143877    9924 round_trippers.go:587]     Audit-Id: 011facbd-d5b2-4392-9ce8-2d5431645083
	I0317 12:12:31.143877    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:31.143877    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:31.144183    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 32 31 38 00 42  |1cdec8612.6218.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0317 12:12:31.637827    9924 type.go:168] "Request Body" body=""
	I0317 12:12:31.637827    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:31.637827    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:31.637827    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:31.637827    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:31.642682    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:31.642682    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:31.642792    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:31.642792    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:31.642792    9924 round_trippers.go:587]     Content-Length: 2789
	I0317 12:12:31.642792    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:31 GMT
	I0317 12:12:31.642792    9924 round_trippers.go:587]     Audit-Id: 1c2fe190-38d0-42b1-854e-6dd961b529e8
	I0317 12:12:31.642792    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:31.642792    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:31.643032    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 32 31 38 00 42  |1cdec8612.6218.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0317 12:12:32.138737    9924 type.go:168] "Request Body" body=""
	I0317 12:12:32.138737    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:32.138737    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:32.138737    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:32.138737    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:32.142759    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:32.142833    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:32.142893    9924 round_trippers.go:587]     Audit-Id: b5ad040e-d6a8-4955-844d-ceda4e1b83f9
	I0317 12:12:32.142893    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:32.142893    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:32.142893    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:32.142893    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:32.142893    9924 round_trippers.go:587]     Content-Length: 2789
	I0317 12:12:32.142893    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:32 GMT
	I0317 12:12:32.143012    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 32 31 38 00 42  |1cdec8612.6218.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0317 12:12:32.638434    9924 type.go:168] "Request Body" body=""
	I0317 12:12:32.638989    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:32.638989    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:32.639052    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:32.639052    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:32.643938    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:32.644031    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:32.644031    9924 round_trippers.go:587]     Content-Length: 2789
	I0317 12:12:32.644031    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:32 GMT
	I0317 12:12:32.644031    9924 round_trippers.go:587]     Audit-Id: f1a6040c-411d-4174-9d69-2e617b91ff75
	I0317 12:12:32.644095    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:32.644095    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:32.644095    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:32.644095    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:32.644378    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 32 31 38 00 42  |1cdec8612.6218.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0317 12:12:32.644572    9924 node_ready.go:53] node "multinode-781100-m02" has status "Ready":"False"
	I0317 12:12:33.138159    9924 type.go:168] "Request Body" body=""
	I0317 12:12:33.138159    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:33.138159    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:33.138159    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:33.138159    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:33.142326    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:33.142326    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:33.142326    9924 round_trippers.go:587]     Content-Length: 2789
	I0317 12:12:33.142326    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:33 GMT
	I0317 12:12:33.142326    9924 round_trippers.go:587]     Audit-Id: c6da5c19-f7a5-4efe-8402-0c1be6fb168f
	I0317 12:12:33.142326    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:33.142326    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:33.142326    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:33.142326    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:33.142326    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 32 31 38 00 42  |1cdec8612.6218.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0317 12:12:33.638070    9924 type.go:168] "Request Body" body=""
	I0317 12:12:33.638070    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:33.638070    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:33.638070    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:33.638070    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:33.642624    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:33.642624    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:33.642624    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:33 GMT
	I0317 12:12:33.642624    9924 round_trippers.go:587]     Audit-Id: d7795d8a-7cb0-4c96-b832-23cb61b93a3b
	I0317 12:12:33.642624    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:33.642624    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:33.642624    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:33.642624    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:33.642624    9924 round_trippers.go:587]     Content-Length: 2789
	I0317 12:12:33.642624    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 32 31 38 00 42  |1cdec8612.6218.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0317 12:12:34.138560    9924 type.go:168] "Request Body" body=""
	I0317 12:12:34.138560    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:34.138560    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:34.138560    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:34.138560    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:34.145243    9924 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 12:12:34.145353    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:34.145353    9924 round_trippers.go:587]     Audit-Id: 502c07fe-c2d0-4a17-89ac-21b521d50aee
	I0317 12:12:34.145353    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:34.145422    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:34.145422    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:34.145447    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:34.145475    9924 round_trippers.go:587]     Content-Length: 2789
	I0317 12:12:34.145475    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:34 GMT
	I0317 12:12:34.145475    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 32 31 38 00 42  |1cdec8612.6218.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0317 12:12:34.639360    9924 type.go:168] "Request Body" body=""
	I0317 12:12:34.639577    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:34.639577    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:34.639577    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:34.639577    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:34.643376    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:12:34.643376    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:34.643376    9924 round_trippers.go:587]     Audit-Id: 7df225e1-4201-4cb6-a880-82fef8f133f3
	I0317 12:12:34.643376    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:34.643376    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:34.643376    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:34.643376    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:34.643376    9924 round_trippers.go:587]     Content-Length: 2789
	I0317 12:12:34.643376    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:34 GMT
	I0317 12:12:34.643629    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 32 31 38 00 42  |1cdec8612.6218.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0317 12:12:35.138676    9924 type.go:168] "Request Body" body=""
	I0317 12:12:35.138799    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:35.138844    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:35.138844    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:35.138886    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:35.143618    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:35.143719    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:35.143719    9924 round_trippers.go:587]     Content-Length: 2789
	I0317 12:12:35.143803    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:35 GMT
	I0317 12:12:35.143803    9924 round_trippers.go:587]     Audit-Id: 543973bb-393c-4f29-9d56-93a1afb57f0f
	I0317 12:12:35.143803    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:35.143803    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:35.143803    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:35.143803    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:35.143883    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 32 31 38 00 42  |1cdec8612.6218.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0317 12:12:35.144119    9924 node_ready.go:53] node "multinode-781100-m02" has status "Ready":"False"
	I0317 12:12:35.637961    9924 type.go:168] "Request Body" body=""
	I0317 12:12:35.637961    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:35.637961    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:35.637961    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:35.637961    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:35.642744    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:35.642744    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:35.642744    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:35.642744    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:35 GMT
	I0317 12:12:35.642744    9924 round_trippers.go:587]     Audit-Id: 7da3ed07-15ef-4853-844f-3a167f45245c
	I0317 12:12:35.642744    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:35.642744    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:35.642744    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:35.642744    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:35.642744    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:36.138013    9924 type.go:168] "Request Body" body=""
	I0317 12:12:36.138013    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:36.138013    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:36.138013    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:36.138013    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:36.146134    9924 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 12:12:36.146134    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:36.146134    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:36.146134    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:36.146134    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:36.146134    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:36.146134    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:36 GMT
	I0317 12:12:36.146134    9924 round_trippers.go:587]     Audit-Id: b3dcae98-bb8e-4a5b-84de-c6020def417a
	I0317 12:12:36.146134    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:36.146134    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:36.637887    9924 type.go:168] "Request Body" body=""
	I0317 12:12:36.637887    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:36.637887    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:36.637887    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:36.637887    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:36.641798    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:12:36.641798    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:36.641798    9924 round_trippers.go:587]     Audit-Id: 54b716cb-5bad-425a-84b0-bb0e69304297
	I0317 12:12:36.641798    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:36.641798    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:36.641798    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:36.641798    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:36.641798    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:36.641798    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:36 GMT
	I0317 12:12:36.642890    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:37.138261    9924 type.go:168] "Request Body" body=""
	I0317 12:12:37.138792    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:37.139075    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:37.139075    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:37.139160    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:37.146563    9924 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 12:12:37.146618    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:37.146618    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:37.146618    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:37.146618    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:37 GMT
	I0317 12:12:37.146618    9924 round_trippers.go:587]     Audit-Id: d395b681-e89e-428f-8aab-394780965f86
	I0317 12:12:37.146618    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:37.146618    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:37.146618    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:37.146618    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:37.146618    9924 node_ready.go:53] node "multinode-781100-m02" has status "Ready":"False"
	I0317 12:12:37.638472    9924 type.go:168] "Request Body" body=""
	I0317 12:12:37.638472    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:37.638472    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:37.638472    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:37.638472    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:37.642747    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:37.642747    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:37.642815    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:37 GMT
	I0317 12:12:37.642815    9924 round_trippers.go:587]     Audit-Id: 6d942b3d-1e00-478b-a55b-6f0ae0f12980
	I0317 12:12:37.642815    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:37.642815    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:37.642815    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:37.642815    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:37.642815    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:37.643228    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:38.138100    9924 type.go:168] "Request Body" body=""
	I0317 12:12:38.138100    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:38.138100    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:38.138100    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:38.138100    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:38.161431    9924 round_trippers.go:581] Response Status: 200 OK in 23 milliseconds
	I0317 12:12:38.161492    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:38.161492    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:38.161492    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:38.161585    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:38 GMT
	I0317 12:12:38.161585    9924 round_trippers.go:587]     Audit-Id: 16604f95-378a-4641-8a0a-f65d380e9df1
	I0317 12:12:38.161585    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:38.161585    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:38.161585    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:38.161765    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:38.638075    9924 type.go:168] "Request Body" body=""
	I0317 12:12:38.638075    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:38.638075    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:38.638075    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:38.638075    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:38.642391    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:38.642391    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:38.642391    9924 round_trippers.go:587]     Audit-Id: a29c59fb-dfb0-4969-8958-13e0bd936370
	I0317 12:12:38.642391    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:38.642391    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:38.642461    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:38.642461    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:38.642461    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:38.642461    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:38 GMT
	I0317 12:12:38.642702    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:39.138968    9924 type.go:168] "Request Body" body=""
	I0317 12:12:39.138968    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:39.138968    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:39.138968    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:39.138968    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:39.142991    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:39.142991    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:39.142991    9924 round_trippers.go:587]     Audit-Id: 600f93d6-d9af-4016-885a-1ba795b0a99c
	I0317 12:12:39.142991    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:39.142991    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:39.142991    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:39.142991    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:39.142991    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:39.142991    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:39 GMT
	I0317 12:12:39.142991    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:39.638306    9924 type.go:168] "Request Body" body=""
	I0317 12:12:39.638306    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:39.638306    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:39.638306    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:39.638306    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:39.643441    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:12:39.643548    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:39.643548    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:39.643548    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:39 GMT
	I0317 12:12:39.643548    9924 round_trippers.go:587]     Audit-Id: ed070963-f4b2-4130-bd8d-08a063d589c0
	I0317 12:12:39.643612    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:39.643612    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:39.643612    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:39.643612    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:39.643740    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:39.643873    9924 node_ready.go:53] node "multinode-781100-m02" has status "Ready":"False"
	I0317 12:12:40.138918    9924 type.go:168] "Request Body" body=""
	I0317 12:12:40.138918    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:40.138918    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:40.138918    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:40.138918    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:40.142875    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:12:40.142973    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:40.142973    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:40 GMT
	I0317 12:12:40.142973    9924 round_trippers.go:587]     Audit-Id: 77aa9414-a500-4023-8378-a2685592006e
	I0317 12:12:40.143071    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:40.143071    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:40.143071    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:40.143071    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:40.143071    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:40.143071    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:40.638628    9924 type.go:168] "Request Body" body=""
	I0317 12:12:40.638628    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:40.638628    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:40.638628    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:40.638628    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:40.643408    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:40.643408    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:40.643408    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:40.643408    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:40.643408    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:40.643408    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:40.643408    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:40.643408    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:40 GMT
	I0317 12:12:40.643408    9924 round_trippers.go:587]     Audit-Id: fb395faa-7bff-4f90-b950-9a50ad525c4b
	I0317 12:12:40.643635    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:41.138579    9924 type.go:168] "Request Body" body=""
	I0317 12:12:41.139185    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:41.139185    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:41.139239    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:41.139239    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:41.142229    9924 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 12:12:41.142229    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:41.142229    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:41.142229    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:41.142229    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:41.142229    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:41.142229    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:41 GMT
	I0317 12:12:41.142229    9924 round_trippers.go:587]     Audit-Id: 3899daaf-6d37-4234-bfd4-f504f3a52a81
	I0317 12:12:41.142229    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:41.142229    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:41.638677    9924 type.go:168] "Request Body" body=""
	I0317 12:12:41.638677    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:41.638677    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:41.638677    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:41.638677    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:41.784717    9924 round_trippers.go:581] Response Status: 200 OK in 146 milliseconds
	I0317 12:12:41.784801    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:41.784801    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:41.784801    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:41.784862    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:41.784862    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:41.784862    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:41.784862    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:41 GMT
	I0317 12:12:41.784862    9924 round_trippers.go:587]     Audit-Id: f3efe973-cefd-45ef-8556-fc9fc3ba0712
	I0317 12:12:41.785147    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:41.785314    9924 node_ready.go:53] node "multinode-781100-m02" has status "Ready":"False"
	I0317 12:12:42.138774    9924 type.go:168] "Request Body" body=""
	I0317 12:12:42.138861    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:42.138861    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:42.138861    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:42.138861    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:42.147463    9924 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0317 12:12:42.147463    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:42.147463    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:42.147463    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:42.147463    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:42.147463    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:42.147463    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:42.147463    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:42 GMT
	I0317 12:12:42.147463    9924 round_trippers.go:587]     Audit-Id: 8cdffa81-da3d-4e87-95b1-bc01b88e4480
	I0317 12:12:42.147463    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:42.638683    9924 type.go:168] "Request Body" body=""
	I0317 12:12:42.638683    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:42.638683    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:42.638683    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:42.638683    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:42.643430    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:42.643430    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:42.643491    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:42.643491    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:42.643491    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:42.643522    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:42.643522    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:42.643621    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:42 GMT
	I0317 12:12:42.643621    9924 round_trippers.go:587]     Audit-Id: dd51309f-8632-40de-8552-719e6ed0d73b
	I0317 12:12:42.643655    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:43.138678    9924 type.go:168] "Request Body" body=""
	I0317 12:12:43.138678    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:43.138678    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:43.138678    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:43.138678    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:43.144020    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:12:43.144093    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:43.144093    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:43 GMT
	I0317 12:12:43.144093    9924 round_trippers.go:587]     Audit-Id: d75b4d5f-3eef-4690-a906-569f3a232799
	I0317 12:12:43.144093    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:43.144093    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:43.144093    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:43.144093    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:43.144093    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:43.144167    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:43.638258    9924 type.go:168] "Request Body" body=""
	I0317 12:12:43.638258    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:43.638258    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:43.638258    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:43.638258    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:43.641277    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:12:43.641900    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:43.641983    9924 round_trippers.go:587]     Audit-Id: 30742c36-0514-4810-8b26-6289b7f5d290
	I0317 12:12:43.641983    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:43.641983    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:43.641983    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:43.641983    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:43.641983    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:43.641983    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:43 GMT
	I0317 12:12:43.642277    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:44.138795    9924 type.go:168] "Request Body" body=""
	I0317 12:12:44.138795    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:44.138795    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:44.138795    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:44.138795    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:44.144385    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:12:44.144518    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:44.144518    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:44.144549    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:44.144549    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:44.144549    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:44.144576    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:44.144576    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:44 GMT
	I0317 12:12:44.144599    9924 round_trippers.go:587]     Audit-Id: 88176edb-95d8-47aa-b53d-59b51bd8872c
	I0317 12:12:44.144599    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:44.144599    9924 node_ready.go:53] node "multinode-781100-m02" has status "Ready":"False"
	I0317 12:12:44.638449    9924 type.go:168] "Request Body" body=""
	I0317 12:12:44.638449    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:44.638449    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:44.638449    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:44.638449    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:44.642795    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:44.642855    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:44.642855    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:44.642855    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:44.642855    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:44 GMT
	I0317 12:12:44.642855    9924 round_trippers.go:587]     Audit-Id: 60bafa11-4e40-484d-8549-ffc4b64b57c6
	I0317 12:12:44.642855    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:44.642855    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:44.642855    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:44.643173    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:45.138224    9924 type.go:168] "Request Body" body=""
	I0317 12:12:45.138766    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:45.138766    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:45.138853    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:45.138885    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:45.143293    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:45.143293    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:45.143293    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:45.143293    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:45.143293    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:45.143293    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:45.143293    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:45 GMT
	I0317 12:12:45.143293    9924 round_trippers.go:587]     Audit-Id: 078e6363-b3ef-42b3-9293-b77f60d57b94
	I0317 12:12:45.143293    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:45.144165    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:45.638126    9924 type.go:168] "Request Body" body=""
	I0317 12:12:45.638126    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:45.638126    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:45.638126    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:45.638126    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:45.642943    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:45.642943    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:45.642943    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:45.642943    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:45.643072    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:45.643072    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:45.643072    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:45.643072    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:45 GMT
	I0317 12:12:45.643072    9924 round_trippers.go:587]     Audit-Id: 91d5f305-f50f-492c-8643-c0cdefb30737
	I0317 12:12:45.643237    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:46.138900    9924 type.go:168] "Request Body" body=""
	I0317 12:12:46.138900    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:46.138900    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:46.138900    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:46.138900    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:46.143337    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:46.143471    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:46.143570    9924 round_trippers.go:587]     Audit-Id: ca0d9bb8-c3db-4983-b2ad-9761eae49779
	I0317 12:12:46.143649    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:46.143649    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:46.143649    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:46.143674    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:46.143674    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:46.143674    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:46 GMT
	I0317 12:12:46.143674    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:46.638002    9924 type.go:168] "Request Body" body=""
	I0317 12:12:46.638394    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:46.638394    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:46.638394    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:46.638394    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:46.643171    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:46.643171    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:46.643171    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:46.643171    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:46.643171    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:46.643171    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:46.643171    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:46 GMT
	I0317 12:12:46.643171    9924 round_trippers.go:587]     Audit-Id: cba135c7-775c-4ed7-bf4e-4c83282aa640
	I0317 12:12:46.643171    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:46.643171    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:46.643759    9924 node_ready.go:53] node "multinode-781100-m02" has status "Ready":"False"
	I0317 12:12:47.138123    9924 type.go:168] "Request Body" body=""
	I0317 12:12:47.138750    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:47.138750    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:47.138750    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:47.138750    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:47.144231    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:12:47.144231    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:47.144304    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:47.144304    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:47.144329    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:47.144329    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:47.144329    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:47.144359    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:47 GMT
	I0317 12:12:47.144359    9924 round_trippers.go:587]     Audit-Id: 060f1c5d-134c-46df-9929-4874c3e20d3b
	I0317 12:12:47.144447    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:47.638973    9924 type.go:168] "Request Body" body=""
	I0317 12:12:47.638973    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:47.638973    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:47.638973    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:47.638973    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:47.644112    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:12:47.644208    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:47.644208    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:47.644208    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:47.644208    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:47.644208    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:47 GMT
	I0317 12:12:47.644208    9924 round_trippers.go:587]     Audit-Id: 8d3eebb7-7657-4f01-9e4a-927cfaafddca
	I0317 12:12:47.644208    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:47.644208    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:47.644588    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:48.138339    9924 type.go:168] "Request Body" body=""
	I0317 12:12:48.138848    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:48.138848    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:48.138848    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:48.138848    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:48.144961    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:12:48.144961    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:48.144961    9924 round_trippers.go:587]     Audit-Id: 033792aa-48bf-4827-a8c2-daf9c921da47
	I0317 12:12:48.144961    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:48.144961    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:48.144961    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:48.144961    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:48.144961    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:48.144961    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:48 GMT
	I0317 12:12:48.144961    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:48.638881    9924 type.go:168] "Request Body" body=""
	I0317 12:12:48.638881    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:48.638881    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:48.638881    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:48.638881    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:48.644885    9924 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 12:12:48.644885    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:48.644885    9924 round_trippers.go:587]     Audit-Id: 86bfdd90-293a-4150-a2c0-88f1704dde0b
	I0317 12:12:48.645036    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:48.645036    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:48.645036    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:48.645036    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:48.645074    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:48.645074    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:48 GMT
	I0317 12:12:48.645191    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:48.645449    9924 node_ready.go:53] node "multinode-781100-m02" has status "Ready":"False"
	I0317 12:12:49.139193    9924 type.go:168] "Request Body" body=""
	I0317 12:12:49.139268    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:49.139376    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:49.139376    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:49.139376    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:49.143316    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:12:49.143316    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:49.143370    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:49.143370    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:49.143370    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:49 GMT
	I0317 12:12:49.143370    9924 round_trippers.go:587]     Audit-Id: d882f7b7-92a7-42f3-9943-2a97b2b88ba8
	I0317 12:12:49.143370    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:49.143370    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:49.143370    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:49.143370    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:49.638652    9924 type.go:168] "Request Body" body=""
	I0317 12:12:49.638652    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:49.638652    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:49.638652    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:49.638652    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:49.642666    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:49.642666    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:49.642666    9924 round_trippers.go:587]     Audit-Id: 5082e767-737c-45ee-bb18-5a918ade92b1
	I0317 12:12:49.642666    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:49.642666    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:49.642666    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:49.642666    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:49.642666    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:49.642666    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:49 GMT
	I0317 12:12:49.643656    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:50.139761    9924 type.go:168] "Request Body" body=""
	I0317 12:12:50.139862    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:50.139862    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:50.139862    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:50.139862    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:50.144546    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:50.144546    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:50.144546    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:50.144546    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:50 GMT
	I0317 12:12:50.144546    9924 round_trippers.go:587]     Audit-Id: ca908d91-e60f-40c0-b0b3-84bb113c41f4
	I0317 12:12:50.144546    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:50.144546    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:50.144546    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:50.144546    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:50.145103    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:50.638330    9924 type.go:168] "Request Body" body=""
	I0317 12:12:50.638852    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:50.638852    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:50.638852    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:50.638852    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:50.642312    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:12:50.642373    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:50.642373    9924 round_trippers.go:587]     Audit-Id: 1595e105-fdf4-40b7-ad0c-373b07ba1184
	I0317 12:12:50.642373    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:50.642373    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:50.642373    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:50.642373    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:50.642373    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:50.642373    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:50 GMT
	I0317 12:12:50.642739    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:51.138096    9924 type.go:168] "Request Body" body=""
	I0317 12:12:51.138096    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:51.138096    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:51.138096    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:51.138096    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:51.143114    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:12:51.143114    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:51.143114    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:51.143114    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:51 GMT
	I0317 12:12:51.143243    9924 round_trippers.go:587]     Audit-Id: 049d72a3-b38b-4669-9e2a-995ea29cfca9
	I0317 12:12:51.143243    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:51.143243    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:51.143243    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:51.143243    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:51.143554    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:51.143852    9924 node_ready.go:53] node "multinode-781100-m02" has status "Ready":"False"
	I0317 12:12:51.638090    9924 type.go:168] "Request Body" body=""
	I0317 12:12:51.638090    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:51.638090    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:51.638090    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:51.638090    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:51.643084    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:51.643131    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:51.643131    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:51.643131    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:51 GMT
	I0317 12:12:51.643131    9924 round_trippers.go:587]     Audit-Id: c5bcf928-2716-42ba-8ed8-333220420895
	I0317 12:12:51.643131    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:51.643131    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:51.643131    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:51.643131    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:51.643131    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:52.138515    9924 type.go:168] "Request Body" body=""
	I0317 12:12:52.138515    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:52.138515    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:52.138515    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:52.138515    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:52.143166    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:52.143239    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:52.143239    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:52.143239    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:52.143239    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:52.143239    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:52.143239    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:52.143239    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:52 GMT
	I0317 12:12:52.143239    9924 round_trippers.go:587]     Audit-Id: 5d8752dd-36de-459c-8e5d-f24edae531cd
	I0317 12:12:52.143526    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:52.639245    9924 type.go:168] "Request Body" body=""
	I0317 12:12:52.639418    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:52.639473    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:52.639473    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:52.639473    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:52.643722    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:52.643897    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:52.643897    9924 round_trippers.go:587]     Audit-Id: c885f256-2070-4327-8f1d-78b6d9996b0c
	I0317 12:12:52.643897    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:52.643897    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:52.643897    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:52.643897    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:52.643951    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:52.644024    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:52 GMT
	I0317 12:12:52.644316    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:53.138036    9924 type.go:168] "Request Body" body=""
	I0317 12:12:53.138481    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:53.138481    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:53.138601    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:53.138601    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:53.141578    9924 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 12:12:53.142201    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:53.142201    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:53.142201    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:53.142201    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:53 GMT
	I0317 12:12:53.142201    9924 round_trippers.go:587]     Audit-Id: 87e1ee1c-64cd-4b75-976e-3106cdabc6e0
	I0317 12:12:53.142201    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:53.142201    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:53.142201    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:53.142389    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:53.638999    9924 type.go:168] "Request Body" body=""
	I0317 12:12:53.639118    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:53.639118    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:53.639118    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:53.639118    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:53.643125    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:53.643125    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:53.643125    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:53.643125    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:53.643125    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:53.643125    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:53 GMT
	I0317 12:12:53.643125    9924 round_trippers.go:587]     Audit-Id: 02870631-844c-473d-9f96-6b96345dde55
	I0317 12:12:53.643125    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:53.643125    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:53.643125    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:53.643756    9924 node_ready.go:53] node "multinode-781100-m02" has status "Ready":"False"
	I0317 12:12:54.138757    9924 type.go:168] "Request Body" body=""
	I0317 12:12:54.139448    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:54.139448    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:54.139448    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:54.139448    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:54.146303    9924 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 12:12:54.146303    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:54.146303    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:54 GMT
	I0317 12:12:54.146303    9924 round_trippers.go:587]     Audit-Id: acc3db23-64ea-4032-965d-b32a187a0c26
	I0317 12:12:54.146303    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:54.146495    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:54.146495    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:54.146495    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:54.146495    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:54.146809    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:54.638609    9924 type.go:168] "Request Body" body=""
	I0317 12:12:54.638681    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:54.638766    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:54.638766    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:54.638766    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:54.644787    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:12:54.644787    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:54.644787    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:54.644787    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:54.644787    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:54.644787    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:54.644787    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:54.644787    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:54 GMT
	I0317 12:12:54.644787    9924 round_trippers.go:587]     Audit-Id: 4ad02768-d560-4832-97a4-dd7546485ff9
	I0317 12:12:54.644787    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:55.139848    9924 type.go:168] "Request Body" body=""
	I0317 12:12:55.139924    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:55.139924    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:55.139924    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:55.139924    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:55.142853    9924 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 12:12:55.143705    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:55.143705    9924 round_trippers.go:587]     Audit-Id: b715d47a-402e-4063-9f82-d596f6999a6d
	I0317 12:12:55.143705    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:55.143705    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:55.143705    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:55.143705    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:55.143705    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:55.143705    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:55 GMT
	I0317 12:12:55.143952    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:55.638919    9924 type.go:168] "Request Body" body=""
	I0317 12:12:55.639200    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:55.639200    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:55.639200    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:55.639200    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:55.644083    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:55.644137    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:55.644137    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:55.644137    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:55.644137    9924 round_trippers.go:587]     Content-Length: 3090
	I0317 12:12:55.644137    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:55 GMT
	I0317 12:12:55.644137    9924 round_trippers.go:587]     Audit-Id: dfca36c4-f616-4197-91a5-87fb04c19c5f
	I0317 12:12:55.644229    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:55.644229    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:55.644384    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 33 30 38 00 42  |1cdec8612.6308.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0317 12:12:55.644384    9924 node_ready.go:53] node "multinode-781100-m02" has status "Ready":"False"
	I0317 12:12:56.139084    9924 type.go:168] "Request Body" body=""
	I0317 12:12:56.139084    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:56.139084    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:56.139084    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:56.139084    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:56.143127    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:56.143127    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:56.143127    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:56.143127    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:56.143127    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:56.143127    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:56.143127    9924 round_trippers.go:587]     Content-Length: 3512
	I0317 12:12:56.143127    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:56 GMT
	I0317 12:12:56.143127    9924 round_trippers.go:587]     Audit-Id: f72e619a-7696-4d29-bb8b-f78d067fc77a
	I0317 12:12:56.143127    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a1 1b 0a 87 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 36 30 38 00 42  |1cdec8612.6608.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 16348 chars]
	 >
	I0317 12:12:56.639047    9924 type.go:168] "Request Body" body=""
	I0317 12:12:56.639289    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:56.639289    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:56.639289    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:56.639289    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:56.643184    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:12:56.643671    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:56.643671    9924 round_trippers.go:587]     Content-Length: 3512
	I0317 12:12:56.643671    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:56 GMT
	I0317 12:12:56.643671    9924 round_trippers.go:587]     Audit-Id: 5eec968b-bce4-4e42-aa38-c44ad9b6958c
	I0317 12:12:56.643671    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:56.643671    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:56.643671    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:56.643671    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:56.643998    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a1 1b 0a 87 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 36 30 38 00 42  |1cdec8612.6608.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 16348 chars]
	 >
	I0317 12:12:57.139004    9924 type.go:168] "Request Body" body=""
	I0317 12:12:57.139190    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:57.139190    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:57.139190    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:57.139190    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:57.146269    9924 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 12:12:57.146406    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:57.146406    9924 round_trippers.go:587]     Audit-Id: 020b59fb-1a7b-4a60-93da-f0c4aa8f9616
	I0317 12:12:57.146406    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:57.146406    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:57.146406    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:57.146496    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:57.146496    9924 round_trippers.go:587]     Content-Length: 3512
	I0317 12:12:57.146496    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:57 GMT
	I0317 12:12:57.146779    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a1 1b 0a 87 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 36 30 38 00 42  |1cdec8612.6608.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 16348 chars]
	 >
	I0317 12:12:57.638777    9924 type.go:168] "Request Body" body=""
	I0317 12:12:57.638777    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:57.638777    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:57.638777    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:57.638777    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:57.643946    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:12:57.643946    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:57.644076    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:57.644076    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:57.644076    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:57.644076    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:57.644076    9924 round_trippers.go:587]     Content-Length: 3512
	I0317 12:12:57.644076    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:57 GMT
	I0317 12:12:57.644076    9924 round_trippers.go:587]     Audit-Id: c59265b4-6c50-4144-8592-b283018fa197
	I0317 12:12:57.644251    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a1 1b 0a 87 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 36 30 38 00 42  |1cdec8612.6608.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 16348 chars]
	 >
	I0317 12:12:58.138679    9924 type.go:168] "Request Body" body=""
	I0317 12:12:58.138679    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:58.138679    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:58.138679    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:58.138679    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:58.143270    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:58.143458    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:58.143458    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:58.143458    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:58.143458    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:58.143458    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:58.143458    9924 round_trippers.go:587]     Content-Length: 3512
	I0317 12:12:58.143458    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:58 GMT
	I0317 12:12:58.143522    9924 round_trippers.go:587]     Audit-Id: 0408e462-0ed5-420e-9bc0-62559c40b6b2
	I0317 12:12:58.143613    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a1 1b 0a 87 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 36 30 38 00 42  |1cdec8612.6608.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 16348 chars]
	 >
	I0317 12:12:58.143613    9924 node_ready.go:53] node "multinode-781100-m02" has status "Ready":"False"
	I0317 12:12:58.638442    9924 type.go:168] "Request Body" body=""
	I0317 12:12:58.639055    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:58.639055    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:58.639055    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:58.639055    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:58.643129    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:12:58.643129    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:58.643129    9924 round_trippers.go:587]     Content-Length: 3512
	I0317 12:12:58.643198    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:58 GMT
	I0317 12:12:58.643198    9924 round_trippers.go:587]     Audit-Id: 25b62969-4faf-4201-bd89-abfa551de6ef
	I0317 12:12:58.643198    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:58.643198    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:58.643198    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:58.643198    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:58.643484    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a1 1b 0a 87 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 36 30 38 00 42  |1cdec8612.6608.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 16348 chars]
	 >
	I0317 12:12:59.138783    9924 type.go:168] "Request Body" body=""
	I0317 12:12:59.138783    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:59.138783    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:59.138783    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:59.138783    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:59.144586    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:12:59.144664    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:59.144664    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:59 GMT
	I0317 12:12:59.144664    9924 round_trippers.go:587]     Audit-Id: 41a78174-6e49-4614-aa03-7e2014a25b57
	I0317 12:12:59.144664    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:59.144664    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:59.144664    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:59.144664    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:59.144664    9924 round_trippers.go:587]     Content-Length: 3390
	I0317 12:12:59.144664    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a7 1a 0a bd 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 36 36 38 00 42  |1cdec8612.6668.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 15722 chars]
	 >
	I0317 12:12:59.145213    9924 node_ready.go:49] node "multinode-781100-m02" has status "Ready":"True"
	I0317 12:12:59.145213    9924 node_ready.go:38] duration metric: took 32.5072246s for node "multinode-781100-m02" to be "Ready" ...
	I0317 12:12:59.145213    9924 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 12:12:59.145439    9924 type.go:204] "Request Body" body=""
	I0317 12:12:59.145439    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods
	I0317 12:12:59.145439    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:59.145439    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:59.145439    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:59.149965    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:12:59.149965    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:59.149965    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:59.149965    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:59.149965    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:59.150131    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:59 GMT
	I0317 12:12:59.150131    9924 round_trippers.go:587]     Audit-Id: 069daff2-7fad-4433-afb7-4f14685591c1
	I0317 12:12:59.150131    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:59.154441    9924 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 c3 93 03 0a  09 0a 00 12 03 36 36 36  |ist..........666|
		00000020  1a 00 12 d0 27 0a ae 19  0a 18 63 6f 72 65 64 6e  |....'.....coredn|
		00000030  73 2d 36 36 38 64 36 62  66 39 62 63 2d 62 38 34  |s-668d6bf9bc-b84|
		00000040  34 35 12 13 63 6f 72 65  64 6e 73 2d 36 36 38 64  |45..coredns-668d|
		00000050  36 62 66 39 62 63 2d 1a  0b 6b 75 62 65 2d 73 79  |6bf9bc-..kube-sy|
		00000060  73 74 65 6d 22 00 2a 24  31 65 66 61 30 64 62 30  |stem".*$1efa0db0|
		00000070  2d 31 33 36 61 2d 34 34  30 35 2d 38 35 65 31 2d  |-136a-4405-85e1-|
		00000080  34 64 32 61 62 63 38 39  62 36 61 31 32 03 34 34  |4d2abc89b6a12.44|
		00000090  38 38 00 42 08 08 ea a1  e0 be 06 10 00 5a 13 0a  |88.B.........Z..|
		000000a0  07 6b 38 73 2d 61 70 70  12 08 6b 75 62 65 2d 64  |.k8s-app..kube-d|
		000000b0  6e 73 5a 1f 0a 11 70 6f  64 2d 74 65 6d 70 6c 61  |nsZ...pod-templa|
		000000c0  74 65 2d 68 61 73 68 12  0a 36 36 38 64 36 62 66  |te-hash..668d6b [truncated 254144 chars]
	 >
	I0317 12:12:59.155107    9924 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-b8445" in "kube-system" namespace to be "Ready" ...
	I0317 12:12:59.155107    9924 type.go:168] "Request Body" body=""
	I0317 12:12:59.155107    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-b8445
	I0317 12:12:59.155107    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:59.155107    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:59.155107    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:59.158119    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:12:59.158254    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:59.158254    9924 round_trippers.go:587]     Audit-Id: b15d3791-d25a-4880-9393-429c101b3244
	I0317 12:12:59.158254    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:59.158254    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:59.158254    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:59.158254    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:59.158254    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:59 GMT
	I0317 12:12:59.158727    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d0 27 0a ae 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.'.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 62 38 34 34 35 12  |68d6bf9bc-b8445.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 65 66  61 30 64 62 30 2d 31 33  |m".*$1efa0db0-13|
		00000060  36 61 2d 34 34 30 35 2d  38 35 65 31 2d 34 64 32  |6a-4405-85e1-4d2|
		00000070  61 62 63 38 39 62 36 61  31 32 03 34 34 38 38 00  |abc89b6a12.4488.|
		00000080  42 08 08 ea a1 e0 be 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 24167 chars]
	 >
	I0317 12:12:59.158891    9924 type.go:168] "Request Body" body=""
	I0317 12:12:59.158977    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:12:59.158977    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:59.158977    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:59.158977    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:59.165174    9924 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 12:12:59.165743    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:59.165743    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:59.165743    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:59.165743    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:59.165743    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:59.165743    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:59 GMT
	I0317 12:12:59.165743    9924 round_trippers.go:587]     Audit-Id: 1c21732b-b5f5-489b-8cc8-1be32764d9ce
	I0317 12:12:59.166554    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d7 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 35  34 38 00 42 08 08 e2 a1  |e7392.4548.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21015 chars]
	 >
	I0317 12:12:59.166762    9924 pod_ready.go:93] pod "coredns-668d6bf9bc-b8445" in "kube-system" namespace has status "Ready":"True"
	I0317 12:12:59.166807    9924 pod_ready.go:82] duration metric: took 11.6996ms for pod "coredns-668d6bf9bc-b8445" in "kube-system" namespace to be "Ready" ...
	I0317 12:12:59.166807    9924 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-781100" in "kube-system" namespace to be "Ready" ...
	I0317 12:12:59.166837    9924 type.go:168] "Request Body" body=""
	I0317 12:12:59.166837    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-781100
	I0317 12:12:59.166837    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:59.166837    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:59.166837    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:59.170743    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:12:59.170799    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:59.170867    9924 round_trippers.go:587]     Audit-Id: 3f1a9df3-dbd0-4eee-b127-e5fac415d334
	I0317 12:12:59.170867    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:59.170918    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:59.170965    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:59.170965    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:59.170965    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:59 GMT
	I0317 12:12:59.171744    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a0 2b 0a 9c 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.+.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 37 38  31 31 30 30 12 00 1a 0b  |inode-781100....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 63  |kube-system".*$c|
		00000040  32 30 63 39 31 61 33 2d  65 62 34 66 2d 34 36 62  |20c91a3-eb4f-46b|
		00000050  66 2d 38 38 61 38 2d 32  65 62 62 66 62 38 61 64  |f-88a8-2ebbfb8ad|
		00000060  35 33 64 32 03 33 39 31  38 00 42 08 08 e5 a1 e0  |53d2.3918.B.....|
		00000070  be 06 10 00 5a 11 0a 09  63 6f 6d 70 6f 6e 65 6e  |....Z...componen|
		00000080  74 12 04 65 74 63 64 5a  15 0a 04 74 69 65 72 12  |t..etcdZ...tier.|
		00000090  0d 63 6f 6e 74 72 6f 6c  2d 70 6c 61 6e 65 62 4e  |.control-planebN|
		000000a0  0a 30 6b 75 62 65 61 64  6d 2e 6b 75 62 65 72 6e  |.0kubeadm.kubern|
		000000b0  65 74 65 73 2e 69 6f 2f  65 74 63 64 2e 61 64 76  |etes.io/etcd.adv|
		000000c0  65 72 74 69 73 65 2d 63  6c 69 65 6e 74 2d 75 72  |ertise-client-u [truncated 26458 chars]
	 >
	I0317 12:12:59.171744    9924 type.go:168] "Request Body" body=""
	I0317 12:12:59.171744    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:12:59.171744    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:59.171744    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:59.171744    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:59.174351    9924 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 12:12:59.174413    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:59.174413    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:59.174413    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:59.174413    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:59 GMT
	I0317 12:12:59.174496    9924 round_trippers.go:587]     Audit-Id: 742f6841-a856-49eb-b3df-301c4f0a4e8b
	I0317 12:12:59.174496    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:59.174496    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:59.174999    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d7 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 35  34 38 00 42 08 08 e2 a1  |e7392.4548.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21015 chars]
	 >
	I0317 12:12:59.175217    9924 pod_ready.go:93] pod "etcd-multinode-781100" in "kube-system" namespace has status "Ready":"True"
	I0317 12:12:59.175217    9924 pod_ready.go:82] duration metric: took 8.3796ms for pod "etcd-multinode-781100" in "kube-system" namespace to be "Ready" ...
	I0317 12:12:59.175271    9924 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-781100" in "kube-system" namespace to be "Ready" ...
	I0317 12:12:59.175398    9924 type.go:168] "Request Body" body=""
	I0317 12:12:59.175496    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-781100
	I0317 12:12:59.175544    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:59.175613    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:59.175613    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:59.177977    9924 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 12:12:59.177977    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:59.177977    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:59.177977    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:59.177977    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:59 GMT
	I0317 12:12:59.177977    9924 round_trippers.go:587]     Audit-Id: cf49a2d6-d82c-4d5d-8122-0df69553969f
	I0317 12:12:59.177977    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:59.177977    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:59.177977    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  85 34 0a ac 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.4.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  37 38 31 31 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |781100....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 33 39 33 35 64 35 64  |ystem".*$3935d5d|
		00000050  31 2d 62 36 38 31 2d 34  39 65 63 2d 39 38 30 31  |1-b681-49ec-9801|
		00000060  2d 66 39 34 30 66 33 34  38 32 30 65 31 32 03 33  |-f940f34820e12.3|
		00000070  38 36 38 00 42 08 08 e5  a1 e0 be 06 10 00 5a 1b  |868.B.........Z.|
		00000080  0a 09 63 6f 6d 70 6f 6e  65 6e 74 12 0e 6b 75 62  |..component..kub|
		00000090  65 2d 61 70 69 73 65 72  76 65 72 5a 15 0a 04 74  |e-apiserverZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 55 0a 3f 6b 75  62 65 61 64 6d 2e 6b 75  |nebU.?kubeadm.ku|
		000000c0  62 65 72 6e 65 74 65 73  2e 69 6f 2f 6b 75 62 65  |bernetes.io/kub [truncated 31993 chars]
	 >
	I0317 12:12:59.178675    9924 type.go:168] "Request Body" body=""
	I0317 12:12:59.178731    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:12:59.178854    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:59.178854    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:59.178854    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:59.181255    9924 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 12:12:59.181255    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:59.181255    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:59.181255    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:59.181255    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:59 GMT
	I0317 12:12:59.181255    9924 round_trippers.go:587]     Audit-Id: 00a45339-a96d-46e1-a42e-1e33c759ce51
	I0317 12:12:59.181255    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:59.182056    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:59.182395    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d7 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 35  34 38 00 42 08 08 e2 a1  |e7392.4548.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21015 chars]
	 >
	I0317 12:12:59.182661    9924 pod_ready.go:93] pod "kube-apiserver-multinode-781100" in "kube-system" namespace has status "Ready":"True"
	I0317 12:12:59.182719    9924 pod_ready.go:82] duration metric: took 7.3725ms for pod "kube-apiserver-multinode-781100" in "kube-system" namespace to be "Ready" ...
	I0317 12:12:59.182719    9924 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-781100" in "kube-system" namespace to be "Ready" ...
	I0317 12:12:59.182719    9924 type.go:168] "Request Body" body=""
	I0317 12:12:59.182815    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-781100
	I0317 12:12:59.182815    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:59.182815    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:59.182815    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:59.185216    9924 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 12:12:59.185779    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:59.185779    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:59.185779    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:59.185779    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:59.185779    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:59.185779    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:59 GMT
	I0317 12:12:59.185779    9924 round_trippers.go:587]     Audit-Id: 33bea496-4279-4904-91ef-f05b34656ad9
	I0317 12:12:59.186782    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  eb 30 0a 99 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.0....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 37 38 31 31 30 30 12  |ultinode-781100.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 36 31 38 38 65 64  30 66 2d 61 32 35 32 2d  |*$6188ed0f-a252-|
		00000060  34 61 35 39 2d 39 62 61  34 2d 32 37 62 30 32 33  |4a59-9ba4-27b023|
		00000070  37 34 63 34 63 31 32 03  34 30 36 38 00 42 08 08  |74c4c12.4068.B..|
		00000080  e5 a1 e0 be 06 10 00 5a  24 0a 09 63 6f 6d 70 6f  |.......Z$..compo|
		00000090  6e 65 6e 74 12 17 6b 75  62 65 2d 63 6f 6e 74 72  |nent..kube-contr|
		000000a0  6f 6c 6c 65 72 2d 6d 61  6e 61 67 65 72 5a 15 0a  |oller-managerZ..|
		000000b0  04 74 69 65 72 12 0d 63  6f 6e 74 72 6f 6c 2d 70  |.tier..control-p|
		000000c0  6c 61 6e 65 62 3d 0a 19  6b 75 62 65 72 6e 65 74  |laneb=..kuberne [truncated 30008 chars]
	 >
	I0317 12:12:59.186982    9924 type.go:168] "Request Body" body=""
	I0317 12:12:59.186982    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:12:59.187055    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:59.187055    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:59.187055    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:59.189296    9924 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0317 12:12:59.189296    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:59.190154    9924 round_trippers.go:587]     Audit-Id: d654778a-8b9a-47dc-8c7b-67af1ee824d7
	I0317 12:12:59.190154    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:59.190154    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:59.190154    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:59.190154    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:59.190154    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:59 GMT
	I0317 12:12:59.190472    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d7 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 35  34 38 00 42 08 08 e2 a1  |e7392.4548.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21015 chars]
	 >
	I0317 12:12:59.190582    9924 pod_ready.go:93] pod "kube-controller-manager-multinode-781100" in "kube-system" namespace has status "Ready":"True"
	I0317 12:12:59.190649    9924 pod_ready.go:82] duration metric: took 7.93ms for pod "kube-controller-manager-multinode-781100" in "kube-system" namespace to be "Ready" ...
	I0317 12:12:59.190649    9924 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-29tvk" in "kube-system" namespace to be "Ready" ...
	I0317 12:12:59.190737    9924 type.go:168] "Request Body" body=""
	I0317 12:12:59.338880    9924 request.go:661] Waited for 148.142ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29tvk
	I0317 12:12:59.338880    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29tvk
	I0317 12:12:59.338880    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:59.338880    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:59.338880    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:59.346555    9924 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0317 12:12:59.346636    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:59.346636    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:59.346636    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:59.346636    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:59.346697    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:59 GMT
	I0317 12:12:59.346719    9924 round_trippers.go:587]     Audit-Id: 92273ae5-9fff-4268-8e54-90dfdd18699a
	I0317 12:12:59.346745    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:59.347461    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  9d 25 0a bf 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 32 39 74 76 6b 12  0b 6b 75 62 65 2d 70 72  |y-29tvk..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 37 65 66  65 36 63 33 32 2d 30 62  |m".*$7efe6c32-0b|
		00000050  39 66 2d 34 64 38 61 2d  38 64 30 38 2d 61 33 39  |9f-4d8a-8d08-a39|
		00000060  39 33 62 36 64 63 35 62  35 32 03 34 30 31 38 00  |93b6dc5b52.4018.|
		00000070  42 08 08 ea a1 e0 be 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  37 62 62 38 34 63 34 39  |n-hash..7bb84c49|
		000000a0  38 34 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |84Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22663 chars]
	 >
	I0317 12:12:59.347461    9924 type.go:168] "Request Body" body=""
	I0317 12:12:59.539185    9924 request.go:661] Waited for 191.7218ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:12:59.539185    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:12:59.539185    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:59.539185    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:59.539185    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:59.552686    9924 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0317 12:12:59.553677    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:59.553707    9924 round_trippers.go:587]     Audit-Id: 5ab33ee0-3923-4e37-a7fa-b9f041b8273f
	I0317 12:12:59.553707    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:59.553707    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:59.553707    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:59.553707    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:59.553707    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:59 GMT
	I0317 12:12:59.553861    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d7 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 35  34 38 00 42 08 08 e2 a1  |e7392.4548.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21015 chars]
	 >
	I0317 12:12:59.553861    9924 pod_ready.go:93] pod "kube-proxy-29tvk" in "kube-system" namespace has status "Ready":"True"
	I0317 12:12:59.553861    9924 pod_ready.go:82] duration metric: took 363.1694ms for pod "kube-proxy-29tvk" in "kube-system" namespace to be "Ready" ...
	I0317 12:12:59.553861    9924 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kc6rf" in "kube-system" namespace to be "Ready" ...
	I0317 12:12:59.553861    9924 type.go:168] "Request Body" body=""
	I0317 12:12:59.738944    9924 request.go:661] Waited for 184.5307ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kc6rf
	I0317 12:12:59.738944    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kc6rf
	I0317 12:12:59.738944    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:59.738944    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:59.738944    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:59.742882    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:12:59.742882    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:59.742937    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:59.742937    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:59.742937    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:59 GMT
	I0317 12:12:59.742937    9924 round_trippers.go:587]     Audit-Id: ef88aee3-daba-4681-8f3d-5f103f7e703d
	I0317 12:12:59.742937    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:59.742937    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:59.743583    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a5 25 0a bf 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 6b 63 36 72 66 12  0b 6b 75 62 65 2d 70 72  |y-kc6rf..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 38 35 33  32 64 33 35 34 2d 61 66  |m".*$8532d354-af|
		00000050  35 35 2d 34 34 61 64 2d  62 39 34 64 2d 61 31 36  |55-44ad-b94d-a16|
		00000060  34 65 37 36 37 64 39 37  64 32 03 36 33 37 38 00  |4e767d97d2.6378.|
		00000070  42 08 08 a9 a3 e0 be 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  37 62 62 38 34 63 34 39  |n-hash..7bb84c49|
		000000a0  38 34 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |84Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22671 chars]
	 >
	I0317 12:12:59.743773    9924 type.go:168] "Request Body" body=""
	I0317 12:12:59.939489    9924 request.go:661] Waited for 195.7133ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:59.940988    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100-m02
	I0317 12:12:59.940988    9924 round_trippers.go:476] Request Headers:
	I0317 12:12:59.940988    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:12:59.941151    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:12:59.947594    9924 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0317 12:12:59.948127    9924 round_trippers.go:584] Response Headers:
	I0317 12:12:59.948127    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:12:59.948127    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:12:59.948127    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:12:59.948127    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:12:59.948127    9924 round_trippers.go:587]     Content-Length: 3390
	I0317 12:12:59.948127    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:12:59 GMT
	I0317 12:12:59.948127    9924 round_trippers.go:587]     Audit-Id: e004a2a7-9ef8-421f-afb4-2e5724effce6
	I0317 12:12:59.948614    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a7 1a 0a bd 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 37 38 31 31 30 30  2d 6d 30 32 12 00 1a 00  |e-781100-m02....|
		00000030  22 00 2a 24 34 62 35 63  62 32 63 66 2d 62 38 62  |".*$4b5cb2cf-b8b|
		00000040  64 2d 34 32 66 61 2d 39  36 35 62 2d 33 31 35 33  |d-42fa-965b-3153|
		00000050  31 63 64 65 63 38 36 31  32 03 36 36 36 38 00 42  |1cdec8612.6668.B|
		00000060  08 08 a9 a3 e0 be 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 15722 chars]
	 >
	I0317 12:12:59.948847    9924 pod_ready.go:93] pod "kube-proxy-kc6rf" in "kube-system" namespace has status "Ready":"True"
	I0317 12:12:59.948847    9924 pod_ready.go:82] duration metric: took 394.982ms for pod "kube-proxy-kc6rf" in "kube-system" namespace to be "Ready" ...
	I0317 12:12:59.948950    9924 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-781100" in "kube-system" namespace to be "Ready" ...
	I0317 12:12:59.949083    9924 type.go:168] "Request Body" body=""
	I0317 12:13:00.139319    9924 request.go:661] Waited for 190.1392ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-781100
	I0317 12:13:00.139319    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-781100
	I0317 12:13:00.139824    9924 round_trippers.go:476] Request Headers:
	I0317 12:13:00.139824    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:13:00.139824    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:13:00.144234    9924 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0317 12:13:00.144506    9924 round_trippers.go:584] Response Headers:
	I0317 12:13:00.144574    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:13:00.144574    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:13:00 GMT
	I0317 12:13:00.144630    9924 round_trippers.go:587]     Audit-Id: a3b4c6f7-7d9f-43b1-8e06-0292308c29cc
	I0317 12:13:00.144630    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:13:00.144630    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:13:00.144630    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:13:00.144751    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  f6 22 0a 81 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.".....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  37 38 31 31 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |781100....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 32 30 37 33 66 65 62  |ystem".*$2073feb|
		00000050  39 2d 39 35 63 38 2d 34  30 65 34 2d 39 38 62 39  |9-95c8-40e4-98b9|
		00000060  2d 31 39 37 35 39 61 38  64 62 36 65 39 32 03 33  |-19759a8db6e92.3|
		00000070  36 31 38 00 42 08 08 e5  a1 e0 be 06 10 00 5a 1b  |618.B.........Z.|
		00000080  0a 09 63 6f 6d 70 6f 6e  65 6e 74 12 0e 6b 75 62  |..component..kub|
		00000090  65 2d 73 63 68 65 64 75  6c 65 72 5a 15 0a 04 74  |e-schedulerZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 3d 0a 19 6b 75  62 65 72 6e 65 74 65 73  |neb=..kubernetes|
		000000c0  2e 69 6f 2f 63 6f 6e 66  69 67 2e 68 61 73 68 12  |.io/config.hash [truncated 21171 chars]
	 >
	I0317 12:13:00.145509    9924 type.go:168] "Request Body" body=""
	I0317 12:13:00.339356    9924 request.go:661] Waited for 193.754ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:13:00.339356    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes/multinode-781100
	I0317 12:13:00.339356    9924 round_trippers.go:476] Request Headers:
	I0317 12:13:00.339356    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:13:00.339356    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:13:00.343242    9924 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0317 12:13:00.343242    9924 round_trippers.go:584] Response Headers:
	I0317 12:13:00.343242    9924 round_trippers.go:587]     Audit-Id: 3981ca20-0470-4927-8a41-72ac62a900e9
	I0317 12:13:00.343242    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:13:00.343242    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:13:00.343242    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:13:00.343359    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:13:00.343359    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:13:00 GMT
	I0317 12:13:00.343860    9924 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d7 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 37 38 31 31 30 30  12 00 1a 00 22 00 2a 24  |e-781100....".*$|
		00000030  61 61 65 38 30 62 63 35  2d 34 33 30 37 2d 34 31  |aae80bc5-4307-41|
		00000040  31 37 2d 39 37 35 37 2d  35 32 62 61 38 31 65 64  |17-9757-52ba81ed|
		00000050  65 37 33 39 32 03 34 35  34 38 00 42 08 08 e2 a1  |e7392.4548.B....|
		00000060  e0 be 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21015 chars]
	 >
	I0317 12:13:00.344046    9924 pod_ready.go:93] pod "kube-scheduler-multinode-781100" in "kube-system" namespace has status "Ready":"True"
	I0317 12:13:00.344046    9924 pod_ready.go:82] duration metric: took 395.092ms for pod "kube-scheduler-multinode-781100" in "kube-system" namespace to be "Ready" ...
	I0317 12:13:00.344046    9924 pod_ready.go:39] duration metric: took 1.1986459s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 12:13:00.344046    9924 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 12:13:00.355281    9924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 12:13:00.381145    9924 system_svc.go:56] duration metric: took 37.0983ms WaitForService to wait for kubelet
	I0317 12:13:00.381145    9924 kubeadm.go:582] duration metric: took 34.0266249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 12:13:00.381271    9924 node_conditions.go:102] verifying NodePressure condition ...
	I0317 12:13:00.381380    9924 type.go:204] "Request Body" body=""
	I0317 12:13:00.539851    9924 request.go:661] Waited for 158.4216ms due to client-side throttling, not priority and fairness, request: GET:https://172.25.16.124:8443/api/v1/nodes
	I0317 12:13:00.539851    9924 round_trippers.go:470] GET https://172.25.16.124:8443/api/v1/nodes
	I0317 12:13:00.539851    9924 round_trippers.go:476] Request Headers:
	I0317 12:13:00.539851    9924 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0317 12:13:00.539851    9924 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0317 12:13:00.545295    9924 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0317 12:13:00.545295    9924 round_trippers.go:584] Response Headers:
	I0317 12:13:00.545295    9924 round_trippers.go:587]     Date: Mon, 17 Mar 2025 12:13:00 GMT
	I0317 12:13:00.545295    9924 round_trippers.go:587]     Audit-Id: 49d91660-d96a-40d6-8abf-27dcdb891c4f
	I0317 12:13:00.545295    9924 round_trippers.go:587]     Cache-Control: no-cache, private
	I0317 12:13:00.545295    9924 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0317 12:13:00.545295    9924 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: c7ba1c88-7bc2-4762-bde9-3c66465ba8d8
	I0317 12:13:00.545295    9924 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: abb28d27-6b12-4bf2-a8e0-84ccd0677998
	I0317 12:13:00.546126    9924 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 8f 3d 0a  09 0a 00 12 03 36 36 37  |List..=......667|
		00000020  1a 00 12 d7 22 0a 8a 11  0a 10 6d 75 6c 74 69 6e  |....".....multin|
		00000030  6f 64 65 2d 37 38 31 31  30 30 12 00 1a 00 22 00  |ode-781100....".|
		00000040  2a 24 61 61 65 38 30 62  63 35 2d 34 33 30 37 2d  |*$aae80bc5-4307-|
		00000050  34 31 31 37 2d 39 37 35  37 2d 35 32 62 61 38 31  |4117-9757-52ba81|
		00000060  65 64 65 37 33 39 32 03  34 35 34 38 00 42 08 08  |ede7392.4548.B..|
		00000070  e2 a1 e0 be 06 10 00 5a  20 0a 17 62 65 74 61 2e  |.......Z ..beta.|
		00000080  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		00000090  63 68 12 05 61 6d 64 36  34 5a 1e 0a 15 62 65 74  |ch..amd64Z...bet|
		000000a0  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		000000b0  6f 73 12 05 6c 69 6e 75  78 5a 1b 0a 12 6b 75 62  |os..linuxZ...kub|
		000000c0  65 72 6e 65 74 65 73 2e  69 6f 2f 61 72 63 68 12  |ernetes.io/arch [truncated 37759 chars]
	 >
	I0317 12:13:00.546392    9924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 12:13:00.546392    9924 node_conditions.go:123] node cpu capacity is 2
	I0317 12:13:00.546392    9924 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 12:13:00.546392    9924 node_conditions.go:123] node cpu capacity is 2
	I0317 12:13:00.546392    9924 node_conditions.go:105] duration metric: took 165.1191ms to run NodePressure ...
	I0317 12:13:00.546392    9924 start.go:241] waiting for startup goroutines ...
	I0317 12:13:00.546392    9924 start.go:255] writing updated cluster config ...
	I0317 12:13:00.559829    9924 ssh_runner.go:195] Run: rm -f paused
	I0317 12:13:00.696971    9924 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 12:13:00.700996    9924 out.go:177] * Done! kubectl is now configured to use "multinode-781100" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 17 12:09:37 multinode-781100 dockerd[1452]: time="2025-03-17T12:09:37.131305206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 12:09:37 multinode-781100 dockerd[1452]: time="2025-03-17T12:09:37.152146234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 12:09:37 multinode-781100 dockerd[1452]: time="2025-03-17T12:09:37.152233334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 12:09:37 multinode-781100 dockerd[1452]: time="2025-03-17T12:09:37.152252934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 12:09:37 multinode-781100 dockerd[1452]: time="2025-03-17T12:09:37.152403934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 12:09:37 multinode-781100 cri-dockerd[1349]: time="2025-03-17T12:09:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/93cf67438b52a55be4e4ab73cdb879b4dffcd8567b0035b9950d1f0692e337b4/resolv.conf as [nameserver 172.25.16.1]"
	Mar 17 12:09:37 multinode-781100 cri-dockerd[1349]: time="2025-03-17T12:09:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4cf816eb824f3ca31189af9d41c0851ac74458d18e65db6fec397aafcec3e0c2/resolv.conf as [nameserver 172.25.16.1]"
	Mar 17 12:09:37 multinode-781100 dockerd[1452]: time="2025-03-17T12:09:37.510332609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 12:09:37 multinode-781100 dockerd[1452]: time="2025-03-17T12:09:37.512109315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 12:09:37 multinode-781100 dockerd[1452]: time="2025-03-17T12:09:37.512345215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 12:09:37 multinode-781100 dockerd[1452]: time="2025-03-17T12:09:37.512922818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 12:09:37 multinode-781100 dockerd[1452]: time="2025-03-17T12:09:37.709373098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 12:09:37 multinode-781100 dockerd[1452]: time="2025-03-17T12:09:37.711560605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 12:09:37 multinode-781100 dockerd[1452]: time="2025-03-17T12:09:37.711666206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 12:09:37 multinode-781100 dockerd[1452]: time="2025-03-17T12:09:37.711865206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 12:13:26 multinode-781100 dockerd[1452]: time="2025-03-17T12:13:26.230491166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 12:13:26 multinode-781100 dockerd[1452]: time="2025-03-17T12:13:26.230589267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 12:13:26 multinode-781100 dockerd[1452]: time="2025-03-17T12:13:26.230612067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 12:13:26 multinode-781100 dockerd[1452]: time="2025-03-17T12:13:26.230714068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 12:13:26 multinode-781100 cri-dockerd[1349]: time="2025-03-17T12:13:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/355a65411cb2a977dd639f07a52268236470377b1ce0788e2ed03cdafbfcca1f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 17 12:13:28 multinode-781100 cri-dockerd[1349]: time="2025-03-17T12:13:28Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 17 12:13:28 multinode-781100 dockerd[1452]: time="2025-03-17T12:13:28.337360338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 12:13:28 multinode-781100 dockerd[1452]: time="2025-03-17T12:13:28.337546739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 12:13:28 multinode-781100 dockerd[1452]: time="2025-03-17T12:13:28.337567039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 12:13:28 multinode-781100 dockerd[1452]: time="2025-03-17T12:13:28.338608244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	11fef58630bf4       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   49 seconds ago      Running             busybox                   0                   355a65411cb2a       busybox-58667487b6-vnkbn
	b071c7cb6f746       c69fa2e9cbf5f                                                                                         4 minutes ago       Running             coredns                   0                   4cf816eb824f3       coredns-668d6bf9bc-b8445
	e61cc54799cf2       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   93cf67438b52a       storage-provisioner
	35f623511419a       kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495              4 minutes ago       Running             kindnet-cni               0                   e79db8891bd09       kindnet-8pd8m
	1a1ef85b42fd5       f1332858868e1                                                                                         5 minutes ago       Running             kube-proxy                0                   1238eb2baa5e9       kube-proxy-29tvk
	d10d3e89f8130       d8e673e7c9983                                                                                         5 minutes ago       Running             kube-scheduler            0                   1d5ddc729fc16       kube-scheduler-multinode-781100
	97f1c77fb8182       a9e7e6b294baf                                                                                         5 minutes ago       Running             etcd                      0                   66e5d0bfa61e3       etcd-multinode-781100
	58f161d94fdc4       b6a454c5a800d                                                                                         5 minutes ago       Running             kube-controller-manager   0                   39a29ce048ef1       kube-controller-manager-multinode-781100
	1fc32350f0a25       85b7a174738ba                                                                                         5 minutes ago       Running             kube-apiserver            0                   67904207aca81       kube-apiserver-multinode-781100
	
	
	==> coredns [b071c7cb6f74] <==
	[INFO] 10.244.1.2:41467 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000171101s
	[INFO] 10.244.0.3:33682 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188001s
	[INFO] 10.244.0.3:46818 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000102201s
	[INFO] 10.244.0.3:45369 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156201s
	[INFO] 10.244.0.3:49706 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087301s
	[INFO] 10.244.0.3:52680 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000146101s
	[INFO] 10.244.0.3:58827 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121501s
	[INFO] 10.244.0.3:48066 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103901s
	[INFO] 10.244.0.3:48303 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000262401s
	[INFO] 10.244.1.2:60106 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000297202s
	[INFO] 10.244.1.2:47647 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129201s
	[INFO] 10.244.1.2:37921 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140401s
	[INFO] 10.244.1.2:46938 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074801s
	[INFO] 10.244.0.3:46748 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185801s
	[INFO] 10.244.0.3:43098 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000241502s
	[INFO] 10.244.0.3:37779 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163301s
	[INFO] 10.244.0.3:52031 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000157501s
	[INFO] 10.244.1.2:47347 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204801s
	[INFO] 10.244.1.2:59951 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000336902s
	[INFO] 10.244.1.2:48878 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000176501s
	[INFO] 10.244.1.2:58890 - 5 "PTR IN 1.16.25.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0000899s
	[INFO] 10.244.0.3:53150 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205201s
	[INFO] 10.244.0.3:54120 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000230302s
	[INFO] 10.244.0.3:39029 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167601s
	[INFO] 10.244.0.3:52701 - 5 "PTR IN 1.16.25.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000199401s
	
	
	==> describe nodes <==
	Name:               multinode-781100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-781100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76
	                    minikube.k8s.io/name=multinode-781100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_17T12_09_10_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 12:09:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-781100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 12:14:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 12:13:45 +0000   Mon, 17 Mar 2025 12:09:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 12:13:45 +0000   Mon, 17 Mar 2025 12:09:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 12:13:45 +0000   Mon, 17 Mar 2025 12:09:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 12:13:45 +0000   Mon, 17 Mar 2025 12:09:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.16.124
	  Hostname:    multinode-781100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 df6e3db8568045af95b78c223931c037
	  System UUID:                a42f3ea4-07b6-8540-ae51-1f3c9be9833a
	  Boot ID:                    3da4ec7b-8f30-4e7f-92e7-55db869bb632
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-vnkbn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 coredns-668d6bf9bc-b8445                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m3s
	  kube-system                 etcd-multinode-781100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m8s
	  kube-system                 kindnet-8pd8m                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m3s
	  kube-system                 kube-apiserver-multinode-781100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-controller-manager-multinode-781100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-proxy-29tvk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-scheduler-multinode-781100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m                     kube-proxy       
	  Normal  Starting                 5m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m16s (x8 over 5m16s)  kubelet          Node multinode-781100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s (x8 over 5m16s)  kubelet          Node multinode-781100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x7 over 5m16s)  kubelet          Node multinode-781100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m8s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m8s                   kubelet          Node multinode-781100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m8s                   kubelet          Node multinode-781100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m8s                   kubelet          Node multinode-781100 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m4s                   node-controller  Node multinode-781100 event: Registered Node multinode-781100 in Controller
	  Normal  NodeReady                4m41s                  kubelet          Node multinode-781100 status is now: NodeReady
	
	
	Name:               multinode-781100-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-781100-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76
	                    minikube.k8s.io/name=multinode-781100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_03_17T12_12_26_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 12:12:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-781100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 12:14:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 12:13:57 +0000   Mon, 17 Mar 2025 12:12:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 12:13:57 +0000   Mon, 17 Mar 2025 12:12:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 12:13:57 +0000   Mon, 17 Mar 2025 12:12:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 12:13:57 +0000   Mon, 17 Mar 2025 12:12:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.25.25.119
	  Hostname:    multinode-781100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 20d4930435c648ebb3a90db94dfb5496
	  System UUID:                c3cb59fc-4137-7348-b288-6aefac06bead
	  Boot ID:                    f411dc32-4edc-4662-962c-b91980f4cbf5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-kvm5b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kindnet-ntv28               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      112s
	  kube-system                 kube-proxy-kc6rf            0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 100s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  112s (x2 over 112s)  kubelet          Node multinode-781100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s (x2 over 112s)  kubelet          Node multinode-781100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s (x2 over 112s)  kubelet          Node multinode-781100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  112s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           109s                 node-controller  Node multinode-781100-m02 event: Registered Node multinode-781100-m02 in Controller
	  Normal  NodeReady                79s                  kubelet          Node multinode-781100-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +49.602952] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.192774] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[Mar17 12:08] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[  +0.115030] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.537541] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[  +0.190951] systemd-fstab-generator[1065]: Ignoring "noauto" option for root device
	[  +0.248879] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	[  +3.219551] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.204608] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[  +0.185715] systemd-fstab-generator[1326]: Ignoring "noauto" option for root device
	[  +0.293671] systemd-fstab-generator[1342]: Ignoring "noauto" option for root device
	[  +0.125425] kauditd_printk_skb: 190 callbacks suppressed
	[ +11.451227] systemd-fstab-generator[1438]: Ignoring "noauto" option for root device
	[  +0.104059] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.403330] systemd-fstab-generator[1697]: Ignoring "noauto" option for root device
	[  +0.111685] kauditd_printk_skb: 52 callbacks suppressed
	[Mar17 12:09] systemd-fstab-generator[1855]: Ignoring "noauto" option for root device
	[  +0.099828] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.068376] systemd-fstab-generator[2283]: Ignoring "noauto" option for root device
	[  +0.142189] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.796682] systemd-fstab-generator[2394]: Ignoring "noauto" option for root device
	[  +0.212512] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.268289] kauditd_printk_skb: 51 callbacks suppressed
	[Mar17 12:13] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [97f1c77fb818] <==
	{"level":"info","ts":"2025-03-17T12:09:04.139088Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T12:09:04.139575Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T12:09:04.134401Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T12:09:04.147435Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T12:09:04.152150Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-03-17T12:09:04.169761Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T12:09:04.222879Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.16.124:2379"}
	{"level":"warn","ts":"2025-03-17T12:09:23.255440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.000998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-781100\" limit:1 ","response":"range_response_count:1 size:4487"}
	{"level":"info","ts":"2025-03-17T12:09:23.255536Z","caller":"traceutil/trace.go:171","msg":"trace[151662389] range","detail":"{range_begin:/registry/minions/multinode-781100; range_end:; response_count:1; response_revision:407; }","duration":"110.119897ms","start":"2025-03-17T12:09:23.145398Z","end":"2025-03-17T12:09:23.255518Z","steps":["trace[151662389] 'range keys from in-memory index tree'  (duration: 109.790299ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:09:23.255392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.395478ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T12:09:23.256553Z","caller":"traceutil/trace.go:171","msg":"trace[952128721] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:407; }","duration":"253.602474ms","start":"2025-03-17T12:09:23.002936Z","end":"2025-03-17T12:09:23.256538Z","steps":["trace[952128721] 'range keys from in-memory index tree'  (duration: 252.335078ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:12:18.853659Z","caller":"traceutil/trace.go:171","msg":"trace[356254243] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"236.561345ms","start":"2025-03-17T12:12:18.617079Z","end":"2025-03-17T12:12:18.853640Z","steps":["trace[356254243] 'process raft request'  (duration: 202.41965ms)","trace[356254243] 'compare'  (duration: 34.023394ms)"],"step_count":2}
	{"level":"info","ts":"2025-03-17T12:12:30.304953Z","caller":"traceutil/trace.go:171","msg":"trace[108875056] transaction","detail":"{read_only:false; response_revision:623; number_of_response:1; }","duration":"122.533422ms","start":"2025-03-17T12:12:30.182403Z","end":"2025-03-17T12:12:30.304936Z","steps":["trace[108875056] 'process raft request'  (duration: 122.29432ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:12:30.555288Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.822057ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T12:12:30.555360Z","caller":"traceutil/trace.go:171","msg":"trace[1874318700] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:623; }","duration":"126.915458ms","start":"2025-03-17T12:12:30.428432Z","end":"2025-03-17T12:12:30.555347Z","steps":["trace[1874318700] 'range keys from in-memory index tree'  (duration: 126.770857ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:12:35.924111Z","caller":"traceutil/trace.go:171","msg":"trace[1591324506] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"213.013752ms","start":"2025-03-17T12:12:35.711043Z","end":"2025-03-17T12:12:35.924056Z","steps":["trace[1591324506] 'process raft request'  (duration: 212.664649ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:12:36.172378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.674987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T12:12:36.173424Z","caller":"traceutil/trace.go:171","msg":"trace[721958768] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:631; }","duration":"145.750795ms","start":"2025-03-17T12:12:36.027656Z","end":"2025-03-17T12:12:36.173407Z","steps":["trace[721958768] 'range keys from in-memory index tree'  (duration: 144.614487ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:12:41.809402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"379.987178ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T12:12:41.809696Z","caller":"traceutil/trace.go:171","msg":"trace[223008968] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:642; }","duration":"380.298381ms","start":"2025-03-17T12:12:41.429383Z","end":"2025-03-17T12:12:41.809682Z","steps":["trace[223008968] 'range keys from in-memory index tree'  (duration: 379.973178ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:12:41.809990Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.843329ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T12:12:41.810063Z","caller":"traceutil/trace.go:171","msg":"trace[448169338] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:642; }","duration":"299.97423ms","start":"2025-03-17T12:12:41.510077Z","end":"2025-03-17T12:12:41.810052Z","steps":["trace[448169338] 'count revisions from in-memory index tree'  (duration: 299.720728ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:12:41.810113Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-03-17T12:12:41.510061Z","time spent":"300.03973ms","remote":"127.0.0.1:43268","response type":"/etcdserverpb.KV/Range","request count":0,"request size":68,"response count":0,"response size":27,"request content":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true "}
	{"level":"warn","ts":"2025-03-17T12:12:41.810473Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.80994ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-781100-m02\" limit:1 ","response":"range_response_count:1 size:3148"}
	{"level":"info","ts":"2025-03-17T12:12:41.810595Z","caller":"traceutil/trace.go:171","msg":"trace[707531300] range","detail":"{range_begin:/registry/minions/multinode-781100-m02; range_end:; response_count:1; response_revision:642; }","duration":"140.957542ms","start":"2025-03-17T12:12:41.669628Z","end":"2025-03-17T12:12:41.810586Z","steps":["trace[707531300] 'range keys from in-memory index tree'  (duration: 140.70114ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:14:17 up 7 min,  0 users,  load average: 0.05, 0.25, 0.16
	Linux multinode-781100 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [35f623511419] <==
	I0317 12:13:14.714298       1 main.go:324] Node multinode-781100-m02 has CIDR [10.244.1.0/24] 
	I0317 12:13:24.704356       1 main.go:297] Handling node with IPs: map[172.25.16.124:{}]
	I0317 12:13:24.704403       1 main.go:301] handling current node
	I0317 12:13:24.704420       1 main.go:297] Handling node with IPs: map[172.25.25.119:{}]
	I0317 12:13:24.704885       1 main.go:324] Node multinode-781100-m02 has CIDR [10.244.1.0/24] 
	I0317 12:13:34.707626       1 main.go:297] Handling node with IPs: map[172.25.16.124:{}]
	I0317 12:13:34.707662       1 main.go:301] handling current node
	I0317 12:13:34.707750       1 main.go:297] Handling node with IPs: map[172.25.25.119:{}]
	I0317 12:13:34.707761       1 main.go:324] Node multinode-781100-m02 has CIDR [10.244.1.0/24] 
	I0317 12:13:44.713204       1 main.go:297] Handling node with IPs: map[172.25.16.124:{}]
	I0317 12:13:44.713313       1 main.go:301] handling current node
	I0317 12:13:44.713332       1 main.go:297] Handling node with IPs: map[172.25.25.119:{}]
	I0317 12:13:44.713339       1 main.go:324] Node multinode-781100-m02 has CIDR [10.244.1.0/24] 
	I0317 12:13:54.704137       1 main.go:297] Handling node with IPs: map[172.25.16.124:{}]
	I0317 12:13:54.704240       1 main.go:301] handling current node
	I0317 12:13:54.704324       1 main.go:297] Handling node with IPs: map[172.25.25.119:{}]
	I0317 12:13:54.704492       1 main.go:324] Node multinode-781100-m02 has CIDR [10.244.1.0/24] 
	I0317 12:14:04.710298       1 main.go:297] Handling node with IPs: map[172.25.25.119:{}]
	I0317 12:14:04.710468       1 main.go:324] Node multinode-781100-m02 has CIDR [10.244.1.0/24] 
	I0317 12:14:04.711378       1 main.go:297] Handling node with IPs: map[172.25.16.124:{}]
	I0317 12:14:04.711415       1 main.go:301] handling current node
	I0317 12:14:14.712917       1 main.go:297] Handling node with IPs: map[172.25.16.124:{}]
	I0317 12:14:14.713144       1 main.go:301] handling current node
	I0317 12:14:14.713166       1 main.go:297] Handling node with IPs: map[172.25.25.119:{}]
	I0317 12:14:14.713175       1 main.go:324] Node multinode-781100-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [1fc32350f0a2] <==
	I0317 12:09:07.096707       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0317 12:09:07.105562       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0317 12:09:07.106268       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0317 12:09:08.245461       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0317 12:09:08.334527       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0317 12:09:08.494929       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0317 12:09:08.507928       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.16.124]
	I0317 12:09:08.509001       1 controller.go:615] quota admission added evaluator for: endpoints
	I0317 12:09:08.521646       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0317 12:09:09.193387       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0317 12:09:09.454430       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0317 12:09:09.496384       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0317 12:09:09.523675       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0317 12:09:14.638693       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0317 12:09:14.736010       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0317 12:13:31.856230       1 conn.go:339] Error on socket receive: read tcp 172.25.16.124:8443->172.25.16.1:55181: use of closed network connection
	E0317 12:13:32.343615       1 conn.go:339] Error on socket receive: read tcp 172.25.16.124:8443->172.25.16.1:55183: use of closed network connection
	E0317 12:13:32.899663       1 conn.go:339] Error on socket receive: read tcp 172.25.16.124:8443->172.25.16.1:55185: use of closed network connection
	E0317 12:13:33.429575       1 conn.go:339] Error on socket receive: read tcp 172.25.16.124:8443->172.25.16.1:55187: use of closed network connection
	E0317 12:13:33.921958       1 conn.go:339] Error on socket receive: read tcp 172.25.16.124:8443->172.25.16.1:55189: use of closed network connection
	E0317 12:13:34.453629       1 conn.go:339] Error on socket receive: read tcp 172.25.16.124:8443->172.25.16.1:55191: use of closed network connection
	E0317 12:13:35.340364       1 conn.go:339] Error on socket receive: read tcp 172.25.16.124:8443->172.25.16.1:55194: use of closed network connection
	E0317 12:13:45.827562       1 conn.go:339] Error on socket receive: read tcp 172.25.16.124:8443->172.25.16.1:55196: use of closed network connection
	E0317 12:13:46.300211       1 conn.go:339] Error on socket receive: read tcp 172.25.16.124:8443->172.25.16.1:55200: use of closed network connection
	E0317 12:13:56.763617       1 conn.go:339] Error on socket receive: read tcp 172.25.16.124:8443->172.25.16.1:55202: use of closed network connection
	
	
	==> kube-controller-manager [58f161d94fdc] <==
	I0317 12:09:40.132746       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-781100"
	I0317 12:12:25.349516       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-781100-m02\" does not exist"
	I0317 12:12:25.363711       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-781100-m02" podCIDRs=["10.244.1.0/24"]
	I0317 12:12:25.363983       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-781100-m02"
	I0317 12:12:25.364047       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-781100-m02"
	I0317 12:12:25.383690       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-781100-m02"
	I0317 12:12:25.775645       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-781100-m02"
	I0317 12:12:26.378056       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-781100-m02"
	I0317 12:12:28.826539       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-781100-m02"
	I0317 12:12:28.923537       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-781100-m02"
	I0317 12:12:35.462345       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-781100-m02"
	I0317 12:12:56.043485       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-781100-m02"
	I0317 12:12:58.991353       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-781100-m02"
	I0317 12:12:58.993144       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-781100-m02"
	I0317 12:12:59.010087       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-781100-m02"
	I0317 12:13:03.849600       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-781100-m02"
	I0317 12:13:25.565571       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="84.093228ms"
	I0317 12:13:25.610681       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="44.988637ms"
	I0317 12:13:25.612094       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="1.241109ms"
	I0317 12:13:28.442555       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="17.850092ms"
	I0317 12:13:28.443038       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="99.4µs"
	I0317 12:13:29.078182       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="16.507585ms"
	I0317 12:13:29.078490       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="46.001µs"
	I0317 12:13:45.637151       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-781100"
	I0317 12:13:57.629502       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-781100-m02"
	
	
	==> kube-proxy [1a1ef85b42fd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0317 12:09:17.191240       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0317 12:09:17.242602       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.16.124"]
	E0317 12:09:17.242822       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 12:09:17.319713       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0317 12:09:17.319884       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0317 12:09:17.319939       1 server_linux.go:170] "Using iptables Proxier"
	I0317 12:09:17.324654       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 12:09:17.327484       1 server.go:497] "Version info" version="v1.32.2"
	I0317 12:09:17.327572       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 12:09:17.337284       1 config.go:199] "Starting service config controller"
	I0317 12:09:17.337358       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 12:09:17.337640       1 config.go:329] "Starting node config controller"
	I0317 12:09:17.337676       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 12:09:17.340162       1 config.go:105] "Starting endpoint slice config controller"
	I0317 12:09:17.340194       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 12:09:17.437637       1 shared_informer.go:320] Caches are synced for service config
	I0317 12:09:17.437861       1 shared_informer.go:320] Caches are synced for node config
	I0317 12:09:17.441312       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d10d3e89f813] <==
	W0317 12:09:07.286105       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0317 12:09:07.286334       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:09:07.289259       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0317 12:09:07.289712       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:09:07.446118       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0317 12:09:07.446344       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:09:07.446615       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0317 12:09:07.446640       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:09:07.461925       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0317 12:09:07.462098       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:09:07.476052       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0317 12:09:07.476088       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0317 12:09:07.558929       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0317 12:09:07.558994       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0317 12:09:07.574539       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0317 12:09:07.574590       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:09:07.689126       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0317 12:09:07.689183       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:09:07.701512       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0317 12:09:07.701963       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:09:07.708043       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0317 12:09:07.708412       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:09:07.828023       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0317 12:09:07.828132       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0317 12:09:10.063185       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 17 12:10:09 multinode-781100 kubelet[2290]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 17 12:10:09 multinode-781100 kubelet[2290]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 17 12:10:09 multinode-781100 kubelet[2290]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 17 12:10:09 multinode-781100 kubelet[2290]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 17 12:11:09 multinode-781100 kubelet[2290]: E0317 12:11:09.589545    2290 iptables.go:577] "Could not set up iptables canary" err=<
	Mar 17 12:11:09 multinode-781100 kubelet[2290]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 17 12:11:09 multinode-781100 kubelet[2290]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 17 12:11:09 multinode-781100 kubelet[2290]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 17 12:11:09 multinode-781100 kubelet[2290]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 17 12:12:09 multinode-781100 kubelet[2290]: E0317 12:12:09.578173    2290 iptables.go:577] "Could not set up iptables canary" err=<
	Mar 17 12:12:09 multinode-781100 kubelet[2290]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 17 12:12:09 multinode-781100 kubelet[2290]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 17 12:12:09 multinode-781100 kubelet[2290]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 17 12:12:09 multinode-781100 kubelet[2290]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 17 12:13:09 multinode-781100 kubelet[2290]: E0317 12:13:09.576852    2290 iptables.go:577] "Could not set up iptables canary" err=<
	Mar 17 12:13:09 multinode-781100 kubelet[2290]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 17 12:13:09 multinode-781100 kubelet[2290]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 17 12:13:09 multinode-781100 kubelet[2290]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 17 12:13:09 multinode-781100 kubelet[2290]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 17 12:13:25 multinode-781100 kubelet[2290]: I0317 12:13:25.752070    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hs86\" (UniqueName: \"kubernetes.io/projected/060af5a9-3359-412c-b9f1-b0c58468d483-kube-api-access-4hs86\") pod \"busybox-58667487b6-vnkbn\" (UID: \"060af5a9-3359-412c-b9f1-b0c58468d483\") " pod="default/busybox-58667487b6-vnkbn"
	Mar 17 12:14:09 multinode-781100 kubelet[2290]: E0317 12:14:09.581346    2290 iptables.go:577] "Could not set up iptables canary" err=<
	Mar 17 12:14:09 multinode-781100 kubelet[2290]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 17 12:14:09 multinode-781100 kubelet[2290]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 17 12:14:09 multinode-781100 kubelet[2290]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 17 12:14:09 multinode-781100 kubelet[2290]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-781100 -n multinode-781100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-781100 -n multinode-781100: (12.1339342s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-781100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (58.57s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (291s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-781100
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-781100
E0317 12:31:12.723048    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-781100: (1m39.4150471s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-781100 --wait=true -v=8 --alsologtostderr
E0317 12:33:37.871909    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-781100 --wait=true -v=8 --alsologtostderr: exit status 90 (2m58.7745081s)

                                                
                                                
-- stdout --
	* [multinode-781100] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-781100" primary control-plane node in "multinode-781100" cluster
	* Restarting existing hyperv VM for "multinode-781100" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 12:31:32.333321    2224 out.go:345] Setting OutFile to fd 820 ...
	I0317 12:31:32.420002    2224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:31:32.420002    2224 out.go:358] Setting ErrFile to fd 1528...
	I0317 12:31:32.420002    2224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:31:32.442480    2224 out.go:352] Setting JSON to false
	I0317 12:31:32.446550    2224 start.go:129] hostinfo: {"hostname":"minikube6","uptime":8469,"bootTime":1742206223,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5608 Build 19045.5608","kernelVersion":"10.0.19045.5608 Build 19045.5608","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0317 12:31:32.446550    2224 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0317 12:31:32.681896    2224 out.go:177] * [multinode-781100] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608
	I0317 12:31:32.712744    2224 notify.go:220] Checking for updates...
	I0317 12:31:32.719318    2224 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 12:31:32.730475    2224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 12:31:32.745034    2224 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0317 12:31:32.759799    2224 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 12:31:32.771145    2224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 12:31:32.776345    2224 config.go:182] Loaded profile config "multinode-781100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 12:31:32.776611    2224 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 12:31:38.302169    2224 out.go:177] * Using the hyperv driver based on existing profile
	I0317 12:31:38.305987    2224 start.go:297] selected driver: hyperv
	I0317 12:31:38.305987    2224 start.go:901] validating driver "hyperv" against &{Name:multinode-781100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 Cluste
rName:multinode-781100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.124 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.25.119 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.16.223 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:31:38.306690    2224 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 12:31:38.368493    2224 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 12:31:38.368675    2224 cni.go:84] Creating CNI manager for ""
	I0317 12:31:38.368763    2224 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0317 12:31:38.368816    2224 start.go:340] cluster config:
	{Name:multinode-781100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-781100 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.16.124 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.25.25.119 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.25.16.223 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false ku
beflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:31:38.368816    2224 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 12:31:38.489131    2224 out.go:177] * Starting "multinode-781100" primary control-plane node in "multinode-781100" cluster
	I0317 12:31:38.501995    2224 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 12:31:38.503019    2224 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0317 12:31:38.503019    2224 cache.go:56] Caching tarball of preloaded images
	I0317 12:31:38.503424    2224 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 12:31:38.503424    2224 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0317 12:31:38.504244    2224 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\config.json ...
	I0317 12:31:38.506398    2224 start.go:360] acquireMachinesLock for multinode-781100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 12:31:38.507215    2224 start.go:364] duration metric: took 264.1µs to acquireMachinesLock for "multinode-781100"
	I0317 12:31:38.507292    2224 start.go:96] Skipping create...Using existing machine configuration
	I0317 12:31:38.507292    2224 fix.go:54] fixHost starting: 
	I0317 12:31:38.508222    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:31:41.256251    2224 main.go:141] libmachine: [stdout =====>] : Off
	
	I0317 12:31:41.256738    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:31:41.256738    2224 fix.go:112] recreateIfNeeded on multinode-781100: state=Stopped err=<nil>
	W0317 12:31:41.256818    2224 fix.go:138] unexpected machine state, will restart: <nil>
	I0317 12:31:41.313420    2224 out.go:177] * Restarting existing hyperv VM for "multinode-781100" ...
	I0317 12:31:41.403033    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-781100
	I0317 12:31:44.515622    2224 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:31:44.516147    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:31:44.516147    2224 main.go:141] libmachine: Waiting for host to start...
	I0317 12:31:44.516205    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:31:46.742067    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:31:46.742607    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:31:46.742607    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:31:49.215353    2224 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:31:49.215353    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:31:50.216115    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:31:52.397239    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:31:52.397633    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:31:52.397705    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:31:54.922982    2224 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:31:54.922982    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:31:55.923520    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:31:58.134782    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:31:58.134953    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:31:58.135122    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:32:00.629152    2224 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:32:00.629456    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:01.629780    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:32:03.877019    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:32:03.877019    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:03.877019    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:32:06.476282    2224 main.go:141] libmachine: [stdout =====>] : 
	I0317 12:32:06.476282    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:07.477202    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:32:09.738436    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:32:09.738436    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:09.738436    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:32:12.295439    2224 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:32:12.295439    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:12.298429    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:32:14.505383    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:32:14.506198    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:14.506235    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:32:17.147587    2224 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:32:17.147587    2224 main.go:141] libmachine: [stderr =====>] : 
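The repeated `Get-VM ... .state` / `ipaddresses[0]` calls above (12:31:44 through 12:32:17) are a poll loop: the driver shells out to PowerShell about once a second, first confirming the VM is `Running`, then asking for the first adapter's first IP, until the guest's network stack finally reports one. A minimal, self-contained sketch of that wait-for-IP pattern, with `get_vm_ip` as a hypothetical stand-in for the PowerShell invocation:

```shell
#!/bin/sh
# Sketch of the wait-for-IP poll seen in the log above (hypothetical
# stand-in; the real driver shells out to powershell.exe each iteration).
attempt=0
get_vm_ip() {
  # Stub for (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]:
  # prints nothing until the guest has booted far enough to have an address.
  if [ "$attempt" -ge 3 ]; then echo "172.25.16.109"; fi
}
ip=""
while [ -z "$ip" ]; do
  attempt=$((attempt + 1))
  ip=$(get_vm_ip)
  [ -n "$ip" ] || sleep 1
done
echo "VM reachable at $ip after $attempt polls"
```

In this run the real loop needed five rounds (empty stdout at 12:31:49 through 12:32:06) before the address appeared at 12:32:12.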
	I0317 12:32:17.148724    2224 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-781100\config.json ...
	I0317 12:32:17.151795    2224 machine.go:93] provisionDockerMachine start ...
	I0317 12:32:17.151907    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:32:19.324255    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:32:19.324474    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:19.324602    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:32:21.843556    2224 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:32:21.843556    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:21.850368    2224 main.go:141] libmachine: Using SSH client type: native
	I0317 12:32:21.851059    2224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.109 22 <nil> <nil>}
	I0317 12:32:21.851262    2224 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 12:32:21.990513    2224 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0317 12:32:21.990605    2224 buildroot.go:166] provisioning hostname "multinode-781100"
	I0317 12:32:21.990717    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:32:24.133863    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:32:24.133863    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:24.134171    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:32:26.669197    2224 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:32:26.669563    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:26.675386    2224 main.go:141] libmachine: Using SSH client type: native
	I0317 12:32:26.675923    2224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.109 22 <nil> <nil>}
	I0317 12:32:26.675923    2224 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-781100 && echo "multinode-781100" | sudo tee /etc/hostname
	I0317 12:32:26.845712    2224 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-781100
	
	I0317 12:32:26.845712    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:32:28.977068    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:32:28.977148    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:28.977148    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:32:31.586297    2224 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:32:31.586297    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:31.592844    2224 main.go:141] libmachine: Using SSH client type: native
	I0317 12:32:31.593524    2224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.109 22 <nil> <nil>}
	I0317 12:32:31.593524    2224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-781100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-781100/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-781100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 12:32:31.759234    2224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 12:32:31.759345    2224 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0317 12:32:31.759514    2224 buildroot.go:174] setting up certificates
	I0317 12:32:31.759603    2224 provision.go:84] configureAuth start
	I0317 12:32:31.759670    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:32:33.925493    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:32:33.925493    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:33.926542    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:32:36.476866    2224 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:32:36.477207    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:36.477294    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:32:38.714428    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:32:38.714428    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:38.714428    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:32:41.327868    2224 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:32:41.328143    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:41.328143    2224 provision.go:143] copyHostCerts
	I0317 12:32:41.328143    2224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0317 12:32:41.328755    2224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0317 12:32:41.328755    2224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0317 12:32:41.329092    2224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0317 12:32:41.331711    2224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0317 12:32:41.331885    2224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0317 12:32:41.331885    2224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0317 12:32:41.331885    2224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0317 12:32:41.333219    2224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0317 12:32:41.333219    2224 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0317 12:32:41.333219    2224 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0317 12:32:41.333757    2224 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0317 12:32:41.334876    2224 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-781100 san=[127.0.0.1 172.25.16.109 localhost minikube multinode-781100]
	I0317 12:32:41.365998    2224 provision.go:177] copyRemoteCerts
	I0317 12:32:41.378886    2224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 12:32:41.378958    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:32:43.518053    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:32:43.519146    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:43.519231    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:32:46.083662    2224 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:32:46.083662    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:46.084929    2224 sshutil.go:53] new ssh client: &{IP:172.25.16.109 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\id_rsa Username:docker}
	I0317 12:32:46.198821    2224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8198098s)
	I0317 12:32:46.198821    2224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0317 12:32:46.198821    2224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0317 12:32:46.246543    2224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0317 12:32:46.246999    2224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 12:32:46.300775    2224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0317 12:32:46.300775    2224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0317 12:32:46.346129    2224 provision.go:87] duration metric: took 14.5863653s to configureAuth
	I0317 12:32:46.346129    2224 buildroot.go:189] setting minikube options for container-runtime
	I0317 12:32:46.347043    2224 config.go:182] Loaded profile config "multinode-781100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 12:32:46.347118    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:32:48.490321    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:32:48.490619    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:48.490619    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:32:51.049754    2224 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:32:51.049754    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:51.056302    2224 main.go:141] libmachine: Using SSH client type: native
	I0317 12:32:51.057061    2224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.109 22 <nil> <nil>}
	I0317 12:32:51.057061    2224 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0317 12:32:51.202568    2224 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0317 12:32:51.202568    2224 buildroot.go:70] root file system type: tmpfs
	I0317 12:32:51.202774    2224 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0317 12:32:51.202875    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:32:53.330215    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:32:53.330215    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:53.330489    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:32:55.847165    2224 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:32:55.847334    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:55.853518    2224 main.go:141] libmachine: Using SSH client type: native
	I0317 12:32:55.854046    2224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.109 22 <nil> <nil>}
	I0317 12:32:55.854240    2224 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0317 12:32:56.022070    2224 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0317 12:32:56.022271    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:32:58.136815    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:32:58.136901    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:32:58.137033    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:33:00.679994    2224 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:33:00.679994    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:33:00.685156    2224 main.go:141] libmachine: Using SSH client type: native
	I0317 12:33:00.685813    2224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.109 22 <nil> <nil>}
	I0317 12:33:00.685926    2224 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0317 12:33:03.258092    2224 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0317 12:33:03.258092    2224 machine.go:96] duration metric: took 46.1057907s to provisionDockerMachine
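The `diff ... || { mv ...; systemctl ...; }` one-liner at 12:33:00 is an install-if-changed guard: the freshly rendered unit only replaces the installed `docker.service` (followed by a daemon-reload, enable, and restart) when the two files differ. In this run `diff` failed with "can't stat" because no unit existed yet on the restarted VM, so the `||` branch installed it and created the `multi-user.target.wants` symlink. A self-contained sketch of the same idiom, using temp files in place of the systemd paths and a variable in place of the `systemctl` calls:

```shell
#!/bin/sh
# Sketch of the install-if-changed idiom from the log (temp files stand in
# for /lib/systemd/system/docker.service; no services are touched).
unit=$(mktemp)
candidate="${unit}.new"
printf '[Unit]\nDescription=old\n' > "$unit"
printf '[Unit]\nDescription=new\n' > "$candidate"
if diff -u "$unit" "$candidate" >/dev/null 2>&1; then
  action="unchanged"        # identical: leave the running service alone
else
  mv "$candidate" "$unit"   # the real code follows with daemon-reload,
  action="installed"        # enable, and restart of docker
fi
echo "$action"
```

The same branch also covers the first-boot case, since a missing installed file makes `diff` fail just like a content mismatch does.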
	I0317 12:33:03.258092    2224 start.go:293] postStartSetup for "multinode-781100" (driver="hyperv")
	I0317 12:33:03.258092    2224 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 12:33:03.270843    2224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 12:33:03.270843    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:33:05.395481    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:33:05.395481    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:33:05.395481    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:33:08.017289    2224 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:33:08.018134    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:33:08.018829    2224 sshutil.go:53] new ssh client: &{IP:172.25.16.109 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\id_rsa Username:docker}
	I0317 12:33:08.137946    2224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8670489s)
	I0317 12:33:08.149640    2224 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 12:33:08.156859    2224 command_runner.go:130] > NAME=Buildroot
	I0317 12:33:08.156859    2224 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0317 12:33:08.156859    2224 command_runner.go:130] > ID=buildroot
	I0317 12:33:08.156859    2224 command_runner.go:130] > VERSION_ID=2023.02.9
	I0317 12:33:08.156859    2224 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0317 12:33:08.156859    2224 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 12:33:08.156859    2224 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0317 12:33:08.157676    2224 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0317 12:33:08.158628    2224 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> 89402.pem in /etc/ssl/certs
	I0317 12:33:08.158686    2224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> /etc/ssl/certs/89402.pem
	I0317 12:33:08.170629    2224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 12:33:08.188304    2224 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /etc/ssl/certs/89402.pem (1708 bytes)
	I0317 12:33:08.235102    2224 start.go:296] duration metric: took 4.976955s for postStartSetup
	I0317 12:33:08.235261    2224 fix.go:56] duration metric: took 1m29.7269826s for fixHost
	I0317 12:33:08.235376    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:33:10.352576    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:33:10.352576    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:33:10.352576    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:33:12.957622    2224 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:33:12.958780    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:33:12.964892    2224 main.go:141] libmachine: Using SSH client type: native
	I0317 12:33:12.965526    2224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.109 22 <nil> <nil>}
	I0317 12:33:12.965526    2224 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 12:33:13.107048    2224 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742214793.132163064
	
	I0317 12:33:13.107155    2224 fix.go:216] guest clock: 1742214793.132163064
	I0317 12:33:13.107155    2224 fix.go:229] Guest: 2025-03-17 12:33:13.132163064 +0000 UTC Remote: 2025-03-17 12:33:08.2353224 +0000 UTC m=+95.999623901 (delta=4.896840664s)
	I0317 12:33:13.107315    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:33:15.339701    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:33:15.339701    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:33:15.339701    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:33:17.904987    2224 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:33:17.905790    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:33:17.911013    2224 main.go:141] libmachine: Using SSH client type: native
	I0317 12:33:17.911618    2224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.16.109 22 <nil> <nil>}
	I0317 12:33:17.911618    2224 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1742214793
	I0317 12:33:18.069918    2224 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 17 12:33:13 UTC 2025
	
	I0317 12:33:18.070055    2224 fix.go:236] clock set: Mon Mar 17 12:33:13 UTC 2025
	 (err=<nil>)
	I0317 12:33:18.070055    2224 start.go:83] releasing machines lock for "multinode-781100", held for 1m39.5617458s
	I0317 12:33:18.070418    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:33:20.195036    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:33:20.195036    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:33:20.196172    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:33:22.833502    2224 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:33:22.833818    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:33:22.838334    2224 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0317 12:33:22.838556    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:33:22.850115    2224 ssh_runner.go:195] Run: cat /version.json
	I0317 12:33:22.850115    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:33:25.072028    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:33:25.072028    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:33:25.072028    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:33:25.089113    2224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:33:25.089284    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:33:25.089418    2224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:33:27.861299    2224 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:33:27.861299    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:33:27.862843    2224 sshutil.go:53] new ssh client: &{IP:172.25.16.109 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\id_rsa Username:docker}
	I0317 12:33:27.878024    2224 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:33:27.878024    2224 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:33:27.878920    2224 sshutil.go:53] new ssh client: &{IP:172.25.16.109 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\id_rsa Username:docker}
	I0317 12:33:27.957979    2224 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0317 12:33:27.958115    2224 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1196662s)
	W0317 12:33:27.958115    2224 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0317 12:33:27.973948    2224 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0317 12:33:27.973948    2224 ssh_runner.go:235] Completed: cat /version.json: (5.1237767s)
	I0317 12:33:27.986889    2224 ssh_runner.go:195] Run: systemctl --version
	I0317 12:33:27.995790    2224 command_runner.go:130] > systemd 252 (252)
	I0317 12:33:27.995906    2224 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0317 12:33:28.008690    2224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 12:33:28.016540    2224 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0317 12:33:28.016582    2224 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 12:33:28.028118    2224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0317 12:33:28.056137    2224 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0317 12:33:28.056186    2224 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0317 12:33:28.057452    2224 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0317 12:33:28.057780    2224 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 12:33:28.057864    2224 start.go:495] detecting cgroup driver to use...
	I0317 12:33:28.057864    2224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 12:33:28.093125    2224 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0317 12:33:28.105789    2224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 12:33:28.138259    2224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 12:33:28.156717    2224 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 12:33:28.168413    2224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 12:33:28.197455    2224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 12:33:28.229311    2224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 12:33:28.262656    2224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 12:33:28.296570    2224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 12:33:28.328015    2224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 12:33:28.360477    2224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 12:33:28.391174    2224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 12:33:28.422107    2224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 12:33:28.444286    2224 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 12:33:28.445058    2224 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 12:33:28.456914    2224 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 12:33:28.497869    2224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 12:33:28.526953    2224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:33:28.721072    2224 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 12:33:28.752135    2224 start.go:495] detecting cgroup driver to use...
	I0317 12:33:28.766159    2224 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0317 12:33:28.787541    2224 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0317 12:33:28.787541    2224 command_runner.go:130] > [Unit]
	I0317 12:33:28.787686    2224 command_runner.go:130] > Description=Docker Application Container Engine
	I0317 12:33:28.787686    2224 command_runner.go:130] > Documentation=https://docs.docker.com
	I0317 12:33:28.787686    2224 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0317 12:33:28.787686    2224 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0317 12:33:28.787780    2224 command_runner.go:130] > StartLimitBurst=3
	I0317 12:33:28.787780    2224 command_runner.go:130] > StartLimitIntervalSec=60
	I0317 12:33:28.787780    2224 command_runner.go:130] > [Service]
	I0317 12:33:28.787780    2224 command_runner.go:130] > Type=notify
	I0317 12:33:28.787780    2224 command_runner.go:130] > Restart=on-failure
	I0317 12:33:28.787877    2224 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0317 12:33:28.787936    2224 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0317 12:33:28.787936    2224 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0317 12:33:28.787936    2224 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0317 12:33:28.789019    2224 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0317 12:33:28.789044    2224 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0317 12:33:28.789068    2224 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0317 12:33:28.789068    2224 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0317 12:33:28.789211    2224 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0317 12:33:28.789211    2224 command_runner.go:130] > ExecStart=
	I0317 12:33:28.789350    2224 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0317 12:33:28.789350    2224 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0317 12:33:28.789350    2224 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0317 12:33:28.789458    2224 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0317 12:33:28.789458    2224 command_runner.go:130] > LimitNOFILE=infinity
	I0317 12:33:28.789458    2224 command_runner.go:130] > LimitNPROC=infinity
	I0317 12:33:28.789458    2224 command_runner.go:130] > LimitCORE=infinity
	I0317 12:33:28.789458    2224 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0317 12:33:28.789564    2224 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0317 12:33:28.789564    2224 command_runner.go:130] > TasksMax=infinity
	I0317 12:33:28.789564    2224 command_runner.go:130] > TimeoutStartSec=0
	I0317 12:33:28.789564    2224 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0317 12:33:28.789670    2224 command_runner.go:130] > Delegate=yes
	I0317 12:33:28.789670    2224 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0317 12:33:28.789670    2224 command_runner.go:130] > KillMode=process
	I0317 12:33:28.789670    2224 command_runner.go:130] > [Install]
	I0317 12:33:28.789779    2224 command_runner.go:130] > WantedBy=multi-user.target
	I0317 12:33:28.803449    2224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 12:33:28.842933    2224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 12:33:28.886387    2224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 12:33:28.923181    2224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 12:33:28.954392    2224 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 12:33:29.014882    2224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 12:33:29.036666    2224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 12:33:29.073507    2224 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0317 12:33:29.086202    2224 ssh_runner.go:195] Run: which cri-dockerd
	I0317 12:33:29.091348    2224 command_runner.go:130] > /usr/bin/cri-dockerd
	I0317 12:33:29.103511    2224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0317 12:33:29.119711    2224 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0317 12:33:29.167527    2224 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0317 12:33:29.353448    2224 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0317 12:33:29.547724    2224 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0317 12:33:29.547948    2224 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0317 12:33:29.593286    2224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:33:29.782305    2224 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 12:34:30.886521    2224 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0317 12:34:30.886648    2224 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0317 12:34:30.889018    2224 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.105935s)
	I0317 12:34:30.900449    2224 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0317 12:34:30.923798    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 systemd[1]: Starting Docker Application Container Engine...
	I0317 12:34:30.923831    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[646]: time="2025-03-17T12:33:01.330353567Z" level=info msg="Starting up"
	I0317 12:34:30.923831    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[646]: time="2025-03-17T12:33:01.332500904Z" level=info msg="containerd not running, starting managed containerd"
	I0317 12:34:30.924031    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[646]: time="2025-03-17T12:33:01.333761084Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=652
	I0317 12:34:30.924031    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.366201452Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0317 12:34:30.924105    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.393353482Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0317 12:34:30.924158    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.393395585Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0317 12:34:30.924186    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.393460289Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0317 12:34:30.924186    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.393477790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.394137932Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.394242639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.394391948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.394484954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.394506055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.394517456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.395142696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.396159161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.399176253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.399290960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.399422469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.399513375Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.400285824Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.400459035Z" level=info msg="metadata content store policy set" policy=shared
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.408116723Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.408204828Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.408227130Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.408249831Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.408267032Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.408336337Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.408824668Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409235094Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409344201Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409371803Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0317 12:34:30.924275    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409387904Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0317 12:34:30.924847    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409401505Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0317 12:34:30.924847    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409414706Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0317 12:34:30.924847    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409439007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0317 12:34:30.924847    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409456508Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0317 12:34:30.924847    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409469809Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0317 12:34:30.924847    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409482410Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0317 12:34:30.925020    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409494311Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0317 12:34:30.925061    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409514612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0317 12:34:30.925104    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409529913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0317 12:34:30.925163    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409548014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0317 12:34:30.925163    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409561515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0317 12:34:30.925163    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409574016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0317 12:34:30.925253    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409586717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0317 12:34:30.925276    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409598417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0317 12:34:30.925299    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409669122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0317 12:34:30.925299    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409707224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0317 12:34:30.925367    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409742026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0317 12:34:30.925367    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409754627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409766928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409781029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409797430Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409819031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409833732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409845433Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409982742Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410016344Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410049146Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410066147Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410166353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410195455Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410209756Z" level=info msg="NRI interface is disabled by configuration."
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410419170Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410577580Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410633383Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410676986Z" level=info msg="containerd successfully booted in 0.047393s"
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:02 multinode-781100 dockerd[646]: time="2025-03-17T12:33:02.392455186Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:02 multinode-781100 dockerd[646]: time="2025-03-17T12:33:02.617246117Z" level=info msg="Loading containers: start."
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:02 multinode-781100 dockerd[646]: time="2025-03-17T12:33:02.934372964Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.092163796Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.210189769Z" level=info msg="Loading containers: done."
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.236346158Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.236465667Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.236494969Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0317 12:34:30.925417    2224 command_runner.go:130] > Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.237280029Z" level=info msg="Daemon has completed initialization"
	I0317 12:34:30.925995    2224 command_runner.go:130] > Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.280313201Z" level=info msg="API listen on /var/run/docker.sock"
	I0317 12:34:30.925995    2224 command_runner.go:130] > Mar 17 12:33:03 multinode-781100 systemd[1]: Started Docker Application Container Engine.
	I0317 12:34:30.925995    2224 command_runner.go:130] > Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.281066158Z" level=info msg="API listen on [::]:2376"
	I0317 12:34:30.925995    2224 command_runner.go:130] > Mar 17 12:33:29 multinode-781100 systemd[1]: Stopping Docker Application Container Engine...
	I0317 12:34:30.925995    2224 command_runner.go:130] > Mar 17 12:33:29 multinode-781100 dockerd[646]: time="2025-03-17T12:33:29.831988134Z" level=info msg="Processing signal 'terminated'"
	I0317 12:34:30.925995    2224 command_runner.go:130] > Mar 17 12:33:29 multinode-781100 dockerd[646]: time="2025-03-17T12:33:29.834398652Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0317 12:34:30.926149    2224 command_runner.go:130] > Mar 17 12:33:29 multinode-781100 dockerd[646]: time="2025-03-17T12:33:29.834580253Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0317 12:34:30.926149    2224 command_runner.go:130] > Mar 17 12:33:29 multinode-781100 dockerd[646]: time="2025-03-17T12:33:29.835000556Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0317 12:34:30.926267    2224 command_runner.go:130] > Mar 17 12:33:29 multinode-781100 dockerd[646]: time="2025-03-17T12:33:29.834619553Z" level=info msg="Daemon shutdown complete"
	I0317 12:34:30.926267    2224 command_runner.go:130] > Mar 17 12:33:30 multinode-781100 systemd[1]: docker.service: Deactivated successfully.
	I0317 12:34:30.926267    2224 command_runner.go:130] > Mar 17 12:33:30 multinode-781100 systemd[1]: Stopped Docker Application Container Engine.
	I0317 12:34:30.926267    2224 command_runner.go:130] > Mar 17 12:33:30 multinode-781100 systemd[1]: Starting Docker Application Container Engine...
	I0317 12:34:30.926267    2224 command_runner.go:130] > Mar 17 12:33:30 multinode-781100 dockerd[1095]: time="2025-03-17T12:33:30.890408016Z" level=info msg="Starting up"
	I0317 12:34:30.926358    2224 command_runner.go:130] > Mar 17 12:34:30 multinode-781100 dockerd[1095]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0317 12:34:30.926358    2224 command_runner.go:130] > Mar 17 12:34:30 multinode-781100 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0317 12:34:30.926358    2224 command_runner.go:130] > Mar 17 12:34:30 multinode-781100 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0317 12:34:30.926358    2224 command_runner.go:130] > Mar 17 12:34:30 multinode-781100 systemd[1]: Failed to start Docker Application Container Engine.
	I0317 12:34:30.935758    2224 out.go:201] 
	W0317 12:34:30.938845    2224 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Mar 17 12:33:01 multinode-781100 systemd[1]: Starting Docker Application Container Engine...
	Mar 17 12:33:01 multinode-781100 dockerd[646]: time="2025-03-17T12:33:01.330353567Z" level=info msg="Starting up"
	Mar 17 12:33:01 multinode-781100 dockerd[646]: time="2025-03-17T12:33:01.332500904Z" level=info msg="containerd not running, starting managed containerd"
	Mar 17 12:33:01 multinode-781100 dockerd[646]: time="2025-03-17T12:33:01.333761084Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=652
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.366201452Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.393353482Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.393395585Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.393460289Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.393477790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.394137932Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.394242639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.394391948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.394484954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.394506055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.394517456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.395142696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.396159161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.399176253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.399290960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.399422469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.399513375Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.400285824Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.400459035Z" level=info msg="metadata content store policy set" policy=shared
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.408116723Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.408204828Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.408227130Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.408249831Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.408267032Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.408336337Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.408824668Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409235094Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409344201Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409371803Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409387904Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409401505Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409414706Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409439007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409456508Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409469809Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409482410Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409494311Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409514612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409529913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409548014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409561515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409574016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409586717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409598417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409669122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409707224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409742026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409754627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409766928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409781029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409797430Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409819031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409833732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409845433Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409982742Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410016344Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410049146Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410066147Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410166353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410195455Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410209756Z" level=info msg="NRI interface is disabled by configuration."
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410419170Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410577580Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410633383Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410676986Z" level=info msg="containerd successfully booted in 0.047393s"
	Mar 17 12:33:02 multinode-781100 dockerd[646]: time="2025-03-17T12:33:02.392455186Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 17 12:33:02 multinode-781100 dockerd[646]: time="2025-03-17T12:33:02.617246117Z" level=info msg="Loading containers: start."
	Mar 17 12:33:02 multinode-781100 dockerd[646]: time="2025-03-17T12:33:02.934372964Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.092163796Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.210189769Z" level=info msg="Loading containers: done."
	Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.236346158Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.236465667Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.236494969Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.237280029Z" level=info msg="Daemon has completed initialization"
	Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.280313201Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 17 12:33:03 multinode-781100 systemd[1]: Started Docker Application Container Engine.
	Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.281066158Z" level=info msg="API listen on [::]:2376"
	Mar 17 12:33:29 multinode-781100 systemd[1]: Stopping Docker Application Container Engine...
	Mar 17 12:33:29 multinode-781100 dockerd[646]: time="2025-03-17T12:33:29.831988134Z" level=info msg="Processing signal 'terminated'"
	Mar 17 12:33:29 multinode-781100 dockerd[646]: time="2025-03-17T12:33:29.834398652Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Mar 17 12:33:29 multinode-781100 dockerd[646]: time="2025-03-17T12:33:29.834580253Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Mar 17 12:33:29 multinode-781100 dockerd[646]: time="2025-03-17T12:33:29.835000556Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Mar 17 12:33:29 multinode-781100 dockerd[646]: time="2025-03-17T12:33:29.834619553Z" level=info msg="Daemon shutdown complete"
	Mar 17 12:33:30 multinode-781100 systemd[1]: docker.service: Deactivated successfully.
	Mar 17 12:33:30 multinode-781100 systemd[1]: Stopped Docker Application Container Engine.
	Mar 17 12:33:30 multinode-781100 systemd[1]: Starting Docker Application Container Engine...
	Mar 17 12:33:30 multinode-781100 dockerd[1095]: time="2025-03-17T12:33:30.890408016Z" level=info msg="Starting up"
	Mar 17 12:34:30 multinode-781100 dockerd[1095]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Mar 17 12:34:30 multinode-781100 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Mar 17 12:34:30 multinode-781100 systemd[1]: docker.service: Failed with result 'exit-code'.
	Mar 17 12:34:30 multinode-781100 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409669122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409707224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409742026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409754627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409766928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409781029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409797430Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409819031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409833732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409845433Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.409982742Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410016344Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410049146Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410066147Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410166353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410195455Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410209756Z" level=info msg="NRI interface is disabled by configuration."
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410419170Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410577580Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410633383Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Mar 17 12:33:01 multinode-781100 dockerd[652]: time="2025-03-17T12:33:01.410676986Z" level=info msg="containerd successfully booted in 0.047393s"
	Mar 17 12:33:02 multinode-781100 dockerd[646]: time="2025-03-17T12:33:02.392455186Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 17 12:33:02 multinode-781100 dockerd[646]: time="2025-03-17T12:33:02.617246117Z" level=info msg="Loading containers: start."
	Mar 17 12:33:02 multinode-781100 dockerd[646]: time="2025-03-17T12:33:02.934372964Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.092163796Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.210189769Z" level=info msg="Loading containers: done."
	Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.236346158Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.236465667Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.236494969Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.237280029Z" level=info msg="Daemon has completed initialization"
	Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.280313201Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 17 12:33:03 multinode-781100 systemd[1]: Started Docker Application Container Engine.
	Mar 17 12:33:03 multinode-781100 dockerd[646]: time="2025-03-17T12:33:03.281066158Z" level=info msg="API listen on [::]:2376"
	Mar 17 12:33:29 multinode-781100 systemd[1]: Stopping Docker Application Container Engine...
	Mar 17 12:33:29 multinode-781100 dockerd[646]: time="2025-03-17T12:33:29.831988134Z" level=info msg="Processing signal 'terminated'"
	Mar 17 12:33:29 multinode-781100 dockerd[646]: time="2025-03-17T12:33:29.834398652Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Mar 17 12:33:29 multinode-781100 dockerd[646]: time="2025-03-17T12:33:29.834580253Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Mar 17 12:33:29 multinode-781100 dockerd[646]: time="2025-03-17T12:33:29.835000556Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Mar 17 12:33:29 multinode-781100 dockerd[646]: time="2025-03-17T12:33:29.834619553Z" level=info msg="Daemon shutdown complete"
	Mar 17 12:33:30 multinode-781100 systemd[1]: docker.service: Deactivated successfully.
	Mar 17 12:33:30 multinode-781100 systemd[1]: Stopped Docker Application Container Engine.
	Mar 17 12:33:30 multinode-781100 systemd[1]: Starting Docker Application Container Engine...
	Mar 17 12:33:30 multinode-781100 dockerd[1095]: time="2025-03-17T12:33:30.890408016Z" level=info msg="Starting up"
	Mar 17 12:34:30 multinode-781100 dockerd[1095]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Mar 17 12:34:30 multinode-781100 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Mar 17 12:34:30 multinode-781100 systemd[1]: docker.service: Failed with result 'exit-code'.
	Mar 17 12:34:30 multinode-781100 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0317 12:34:30.938845    2224 out.go:270] * 
	* 
	W0317 12:34:30.939828    2224 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 12:34:30.947009    2224 out.go:201] 

                                                
                                                
** /stderr **
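The restart failure above reduces to a single journal line: the relaunched `dockerd[1095]` could not dial `/run/containerd/containerd.sock` within its startup deadline, so systemd marked `docker.service` failed. A hypothetical triage helper (not part of the test suite) that classifies the failure by matching that signature:

```shell
# Hypothetical sketch: classify the docker.service failure by matching the
# containerd dial-timeout signature captured in the journal above.
line='dockerd[1095]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": context deadline exceeded'
case "$line" in
  *'failed to dial "/run/containerd/containerd.sock"'*)
    msg='containerd socket unreachable: check containerd.service before docker.service' ;;
  *)
    msg='no known failure signature' ;;
esac
echo "$msg"
```

When this signature matches, the usual next step is inspecting containerd's own journal, since docker cannot start until the containerd socket is serving.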
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-781100" : exit status 90
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-781100
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-781100	172.25.16.124
multinode-781100-m02	172.25.25.119
multinode-781100-m03	172.25.16.223

                                                
                                                
After restart: multinode-781100	172.25.16.109
multinode-781100-m02	172.25.25.119
multinode-781100-m03	172.25.16.223
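The comparison that fails here is textual: of the three `name<TAB>IP` rows, only the primary node's IP changed across the restart (172.25.16.124 to 172.25.16.109, the lease the VM got from the Hyper-V switch). A minimal sh sketch, using the node lists copied from the report, that counts the differing rows:

```shell
# Reproduce the node-list comparison: write both lists to files and count
# changed rows via diff's normal-format markers ('<' before, '>' after).
printf 'multinode-781100\t172.25.16.124\nmultinode-781100-m02\t172.25.25.119\nmultinode-781100-m03\t172.25.16.223\n' > /tmp/nodes_before.txt
printf 'multinode-781100\t172.25.16.109\nmultinode-781100-m02\t172.25.25.119\nmultinode-781100-m03\t172.25.16.223\n' > /tmp/nodes_after.txt
changed=$(diff /tmp/nodes_before.txt /tmp/nodes_after.txt | grep -c '^[<>]')
echo "$changed line(s) differ"
```

The two counted lines are the before/after rows for the primary node; the worker rows are identical.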
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-781100 -n multinode-781100
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-781100 -n multinode-781100: exit status 6 (12.1267771s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0317 12:34:43.477074    1772 status.go:458] kubeconfig endpoint: get endpoint: "multinode-781100" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-781100" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (291.00s)

                                                
                                    
TestMultiNode/serial/DeleteNode (35.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-781100 node delete m03: exit status 103 (7.1149095s)

                                                
                                                
-- stdout --
	* The control-plane node multinode-781100 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p multinode-781100"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-windows-amd64.exe -p multinode-781100 node delete m03": exit status 103
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-781100 status --alsologtostderr: exit status 7 (16.1082355s)

                                                
                                                
-- stdout --
	multinode-781100
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
	multinode-781100-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-781100-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 12:34:50.769209    5740 out.go:345] Setting OutFile to fd 1800 ...
	I0317 12:34:50.840082    5740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:34:50.840082    5740 out.go:358] Setting ErrFile to fd 1692...
	I0317 12:34:50.840082    5740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:34:50.853079    5740 out.go:352] Setting JSON to false
	I0317 12:34:50.853079    5740 mustload.go:65] Loading cluster: multinode-781100
	I0317 12:34:50.853079    5740 notify.go:220] Checking for updates...
	I0317 12:34:50.854081    5740 config.go:182] Loaded profile config "multinode-781100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 12:34:50.854081    5740 status.go:174] checking status of multinode-781100 ...
	I0317 12:34:50.855088    5740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:34:53.022275    5740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:34:53.022275    5740 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:34:53.022275    5740 status.go:371] multinode-781100 host status = "Running" (err=<nil>)
	I0317 12:34:53.022275    5740 host.go:66] Checking if "multinode-781100" exists ...
	I0317 12:34:53.023564    5740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:34:55.163206    5740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:34:55.163306    5740 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:34:55.163440    5740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:34:57.730651    5740 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:34:57.730779    5740 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:34:57.730779    5740 host.go:66] Checking if "multinode-781100" exists ...
	I0317 12:34:57.745585    5740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 12:34:57.745690    5740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:34:59.838744    5740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:34:59.839743    5740 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:34:59.839827    5740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:35:02.342796    5740 main.go:141] libmachine: [stdout =====>] : 172.25.16.109
	
	I0317 12:35:02.342796    5740 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:35:02.342796    5740 sshutil.go:53] new ssh client: &{IP:172.25.16.109 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\id_rsa Username:docker}
	I0317 12:35:02.446776    5740 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7010346s)
	I0317 12:35:02.461158    5740 ssh_runner.go:195] Run: systemctl --version
	I0317 12:35:02.481648    5740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0317 12:35:02.506968    5740 status.go:458] kubeconfig endpoint: get endpoint: "multinode-781100" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 12:35:02.507055    5740 api_server.go:166] Checking apiserver status ...
	I0317 12:35:02.518648    5740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0317 12:35:02.545047    5740 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0317 12:35:02.545047    5740 status.go:463] multinode-781100 apiserver status = Stopped (err=<nil>)
	I0317 12:35:02.545047    5740 status.go:176] multinode-781100 status: &{Name:multinode-781100 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 12:35:02.545047    5740 status.go:174] checking status of multinode-781100-m02 ...
	I0317 12:35:02.545765    5740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:35:04.622922    5740 main.go:141] libmachine: [stdout =====>] : Off
	
	I0317 12:35:04.622922    5740 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:35:04.622922    5740 status.go:371] multinode-781100-m02 host status = "Stopped" (err=<nil>)
	I0317 12:35:04.623943    5740 status.go:384] host is not running, skipping remaining checks
	I0317 12:35:04.623943    5740 status.go:176] multinode-781100-m02 status: &{Name:multinode-781100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0317 12:35:04.623943    5740 status.go:174] checking status of multinode-781100-m03 ...
	I0317 12:35:04.624834    5740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m03 ).state
	I0317 12:35:06.740129    5740 main.go:141] libmachine: [stdout =====>] : Off
	
	I0317 12:35:06.740129    5740 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:35:06.740984    5740 status.go:371] multinode-781100-m03 host status = "Stopped" (err=<nil>)
	I0317 12:35:06.740984    5740 status.go:384] host is not running, skipping remaining checks
	I0317 12:35:06.740984    5740 status.go:176] multinode-781100-m03 status: &{Name:multinode-781100-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
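Among the probes in the status trace above is the disk-usage check `df -h /var | awk 'NR==2{print $5}'`: `NR==2` selects the first data row (the header is row 1) and `$5` is the `Use%` column. A self-contained reproduction, with sample `df` output standing in for the VM's `/var` filesystem:

```shell
# Reproduce the disk-usage probe from the status check; the sample text
# below is illustrative, not the actual VM's df output.
sample='Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        17G  3.4G   13G  21% /var'
pct=$(printf '%s\n' "$sample" | awk 'NR==2{print $5}')
echo "$pct"
```

The extracted percentage feeds minikube's low-disk warning; here it is only used to show what the awk expression selects.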
multinode_test.go:424: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-781100 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-781100 -n multinode-781100
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-781100 -n multinode-781100: exit status 6 (12.0851975s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0317 12:35:18.792177   10832 status.go:458] kubeconfig endpoint: get endpoint: "multinode-781100" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-781100" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/DeleteNode (35.31s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (54.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-781100 stop: exit status 1 (39.8693027s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-781100-m03"  ...
	* Stopping node "multinode-781100-m02"  ...
	* Stopping node "multinode-781100"  ...
	* Powering off "multinode-781100" via SSH ...

                                                
                                                
-- /stdout --
multinode_test.go:347: failed to stop cluster. args "out/minikube-windows-amd64.exe -p multinode-781100 stop": exit status 1
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-781100 status: context deadline exceeded (91.2µs)
multinode_test.go:354: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-781100 status" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-781100 -n multinode-781100
E0317 12:36:12.726609    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-781100 -n multinode-781100: exit status 7 (14.4252956s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0317 12:36:13.124795    2352 status.go:393] failed to get driver ip: getting IP: IP not found
	E0317 12:36:13.124795    2352 status.go:119] status error: getting IP: IP not found

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-781100" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (54.30s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (302.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-183300 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-183300 --driver=hyperv: exit status 1 (4m59.6599672s)

                                                
                                                
-- stdout --
	* [NoKubernetes-183300] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-183300" primary control-plane node in "NoKubernetes-183300" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-183300 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-183300 -n NoKubernetes-183300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-183300 -n NoKubernetes-183300: exit status 7 (3.0241614s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0317 12:56:34.446047    6908 main.go:137] libmachine: [stderr =====>] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "NoKubernetes-183300".
	At line:1 char:3
	+ ( Hyper-V\Get-VM NoKubernetes-183300 ).state
	+   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : InvalidArgument: (NoKubernetes-183300:String) [Get-VM], VirtualizationException
	    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM
	 
	

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-183300" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (302.68s)
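The `Nonexistent` state reported above is derived from the `Hyper-V\Get-VM` stderr: the cmdlet fails with "unable to find a virtual machine" because VM creation never completed within the 5-minute budget. A hypothetical sh sketch (not minikube's actual code) of mapping that stderr to the printed state:

```shell
# Hypothetical sketch: classify the Get-VM stderr captured above into the
# host state string the status command prints.
err='Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "NoKubernetes-183300".'
case "$err" in
  *'unable to find a virtual machine'*) state=Nonexistent ;;
  *) state=Unknown ;;
esac
echo "$state"
```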

                                                
                                    
TestPause/serial/Unpause (103.5s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-471400 --alsologtostderr -v=5
pause_test.go:121: (dbg) Non-zero exit: out/minikube-windows-amd64.exe unpause -p pause-471400 --alsologtostderr -v=5: exit status 1 (6.0110646s)

                                                
                                                
-- stdout --
	* Unpausing node pause-471400 ... 

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 13:18:00.161474    5528 out.go:345] Setting OutFile to fd 1408 ...
	I0317 13:18:00.251741    5528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:18:00.251741    5528 out.go:358] Setting ErrFile to fd 1436...
	I0317 13:18:00.252792    5528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:18:00.269340    5528 mustload.go:65] Loading cluster: pause-471400
	I0317 13:18:00.271000    5528 config.go:182] Loaded profile config "pause-471400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:18:00.272139    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:18:02.578133    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:18:02.578133    5528 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:02.578133    5528 host.go:66] Checking if "pause-471400" exists ...
	I0317 13:18:02.578962    5528 out.go:352] Setting JSON to false
	I0317 13:18:02.579545    5528 unpause.go:53] namespaces: [kube-system kubernetes-dashboard storage-gluster istio-operator] keys: map[addons:[] all:false apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:8443 auto-pause-interval:1m0s auto-update-drivers:true base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 binary-mirror: bootstrapper:kubeadm cache-images:true cancel-scheduled:false cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:false disable-driver-mounts:false disable-metrics:false disable-optimizations:false disk-size:20000mb dns-domain:cluster.local dns-proxy:false docker-env:[] docker-opt:[] download-only:false driver: dry-run:false embed-certs:false embedcerts:false enable-default-cni:false extra-config: extra-disks:0 feature-gates: force:false force-systemd:false gpus: ha:false host-dns-resolver:true host-only-cidr:192.168.59.1/24 host-
only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:false hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:true interactive:true iso-url:[https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.35.0/minikube-v1.35.0-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.35.0-amd64.iso] keep-context:false keep-context-active:false kubernetes-version: kvm-gpu:false kvm-hidden:false kvm-network:default kvm-numa-count:1 kvm-qemu-uri:qemu:///system listen-address: maxauditentries:1000 memory: mount:false mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:262144 mount-options:[] mount-port:0 mount-string:C:\Users\jenkins.minikube6:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:true network: network-plugin: nfs-share:[] nfs-shares-root:/nf
sshares no-kubernetes:false no-vtx-check:false nodes:1 output:text ports:[] preload:true profile:pause-471400 purge:false qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:24 rootless:false schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:false socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:22 ssh-user:root static-ip: subnet: trace: user: uuid: vm:false vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:true wantupdatenotification:true wantvirtualboxdriverwarning:true]
	I0317 13:18:02.580212    5528 unpause.go:65] node: {Name: IP:172.25.31.3 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 13:18:02.586164    5528 out.go:177] * Unpausing node pause-471400 ... 
	I0317 13:18:02.592702    5528 host.go:66] Checking if "pause-471400" exists ...
	I0317 13:18:02.605184    5528 ssh_runner.go:195] Run: systemctl --version
	I0317 13:18:02.605184    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:18:04.900451    5528 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:18:04.900451    5528 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:04.900556    5528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]

** /stderr **
pause_test.go:123: failed to unpause minikube with args: "out/minikube-windows-amd64.exe unpause -p pause-471400 --alsologtostderr -v=5" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-471400 -n pause-471400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-471400 -n pause-471400: exit status 2 (13.1844855s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/Unpause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/Unpause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-471400 logs -n 25
E0317 13:18:37.901680    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-471400 logs -n 25: (19.6735311s)
helpers_test.go:252: TestPause/serial/Unpause logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-841900 sudo find     | cilium-841900             | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:55 UTC |                     |
	|         | /etc/crio -type f -exec sh -c  |                           |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-841900 sudo crio     | cilium-841900             | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:55 UTC |                     |
	|         | config                         |                           |                   |         |                     |                     |
	| delete  | -p cilium-841900               | cilium-841900             | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:55 UTC | 17 Mar 25 12:55 UTC |
	| start   | -p force-systemd-env-265000    | force-systemd-env-265000  | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:55 UTC | 17 Mar 25 13:02 UTC |
	|         | --memory=2048                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p NoKubernetes-183300         | NoKubernetes-183300       | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:56 UTC | 17 Mar 25 12:56 UTC |
	| start   | -p kubernetes-upgrade-816300   | kubernetes-upgrade-816300 | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:56 UTC | 17 Mar 25 13:04 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p offline-docker-183300       | offline-docker-183300     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:57 UTC | 17 Mar 25 12:58 UTC |
	| start   | -p stopped-upgrade-112300      | minikube                  | minikube6\jenkins | v1.26.0 | 17 Mar 25 12:58 GMT | 17 Mar 25 13:07 GMT |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --vm-driver=hyperv             |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-374500      | running-upgrade-374500    | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:59 UTC | 17 Mar 25 13:08 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| ssh     | force-systemd-env-265000       | force-systemd-env-265000  | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:02 UTC | 17 Mar 25 13:02 UTC |
	|         | ssh docker info --format       |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}              |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-env-265000    | force-systemd-env-265000  | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:02 UTC | 17 Mar 25 13:03 UTC |
	| start   | -p pause-471400 --memory=2048  | pause-471400              | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:03 UTC | 17 Mar 25 13:11 UTC |
	|         | --install-addons=false         |                           |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv     |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-816300   | kubernetes-upgrade-816300 | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:04 UTC | 17 Mar 25 13:05 UTC |
	| start   | -p kubernetes-upgrade-816300   | kubernetes-upgrade-816300 | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:05 UTC | 17 Mar 25 13:12 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| stop    | stopped-upgrade-112300 stop    | minikube                  | minikube6\jenkins | v1.26.0 | 17 Mar 25 13:07 GMT | 17 Mar 25 13:07 GMT |
	| start   | -p stopped-upgrade-112300      | stopped-upgrade-112300    | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:07 UTC | 17 Mar 25 13:14 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p running-upgrade-374500      | running-upgrade-374500    | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:08 UTC | 17 Mar 25 13:09 UTC |
	| start   | -p cert-expiration-735200      | cert-expiration-735200    | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:09 UTC | 17 Mar 25 13:16 UTC |
	|         | --memory=2048                  |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p pause-471400                | pause-471400              | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:11 UTC | 17 Mar 25 13:17 UTC |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-816300   | kubernetes-upgrade-816300 | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:12 UTC |                     |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-816300   | kubernetes-upgrade-816300 | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:12 UTC |                     |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p stopped-upgrade-112300      | stopped-upgrade-112300    | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:14 UTC | 17 Mar 25 13:15 UTC |
	| start   | -p docker-flags-664100         | docker-flags-664100       | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:15 UTC |                     |
	|         | --cache-images=false           |                           |                   |         |                     |                     |
	|         | --memory=2048                  |                           |                   |         |                     |                     |
	|         | --install-addons=false         |                           |                   |         |                     |                     |
	|         | --wait=false                   |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR           |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT           |                           |                   |         |                     |                     |
	|         | --docker-opt=debug             |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| pause   | -p pause-471400                | pause-471400              | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:17 UTC | 17 Mar 25 13:17 UTC |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	| unpause | -p pause-471400                | pause-471400              | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:18 UTC |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 13:15:13
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 13:15:13.663151    7220 out.go:345] Setting OutFile to fd 1536 ...
	I0317 13:15:13.749602    7220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:15:13.749602    7220 out.go:358] Setting ErrFile to fd 1652...
	I0317 13:15:13.749602    7220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:15:13.773383    7220 out.go:352] Setting JSON to false
	I0317 13:15:13.776727    7220 start.go:129] hostinfo: {"hostname":"minikube6","uptime":11090,"bootTime":1742206223,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5608 Build 19045.5608","kernelVersion":"10.0.19045.5608 Build 19045.5608","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0317 13:15:13.776727    7220 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0317 13:15:13.787047    7220 out.go:177] * [docker-flags-664100] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608
	I0317 13:15:13.793247    7220 notify.go:220] Checking for updates...
	I0317 13:15:13.794481    7220 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 13:15:13.797479    7220 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:15:13.800167    7220 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0317 13:15:13.803213    7220 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 13:15:13.805414    7220 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:15:14.126619   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:15:14.127674   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:14.133802   10084 main.go:141] libmachine: Using SSH client type: native
	I0317 13:15:14.133957   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.26.33 22 <nil> <nil>}
	I0317 13:15:14.133957   10084 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0317 13:15:16.552934   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0317 13:15:16.552934   10084 machine.go:96] duration metric: took 48.9495166s to provisionDockerMachine
	I0317 13:15:16.552934   10084 client.go:171] duration metric: took 2m8.1588137s to LocalClient.Create
	I0317 13:15:16.553011   10084 start.go:167] duration metric: took 2m8.1590376s to libmachine.API.Create "cert-expiration-735200"
	I0317 13:15:16.553011   10084 start.go:293] postStartSetup for "cert-expiration-735200" (driver="hyperv")
	I0317 13:15:16.553011   10084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:15:16.570328   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:15:16.570328   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:15:13.809618    7220 config.go:182] Loaded profile config "cert-expiration-735200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:15:13.810088    7220 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:15:13.810785    7220 config.go:182] Loaded profile config "kubernetes-upgrade-816300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:15:13.811208    7220 config.go:182] Loaded profile config "pause-471400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:15:13.811208    7220 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:15:19.518826    7220 out.go:177] * Using the hyperv driver based on user configuration
	I0317 13:15:19.522716    7220 start.go:297] selected driver: hyperv
	I0317 13:15:19.522716    7220 start.go:901] validating driver "hyperv" against <nil>
	I0317 13:15:19.522716    7220 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:15:19.578528    7220 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 13:15:19.579283    7220 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0317 13:15:19.580341    7220 cni.go:84] Creating CNI manager for ""
	I0317 13:15:19.580341    7220 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 13:15:19.580341    7220 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0317 13:15:19.580341    7220 start.go:340] cluster config:
	{Name:docker-flags-664100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:docker-flags-664100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:15:19.580341    7220 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:15:19.587799    7220 out.go:177] * Starting "docker-flags-664100" primary control-plane node in "docker-flags-664100" cluster
	I0317 13:15:19.443094   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:19.443094   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:19.444097   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:19.590655    7220 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 13:15:19.590879    7220 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0317 13:15:19.590935    7220 cache.go:56] Caching tarball of preloaded images
	I0317 13:15:19.591271    7220 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 13:15:19.591524    7220 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0317 13:15:19.591848    7220 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\docker-flags-664100\config.json ...
	I0317 13:15:19.592309    7220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\docker-flags-664100\config.json: {Name:mk8f96dcd7109b2db4c71e9e8573ce48dccde009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:15:19.594098    7220 start.go:360] acquireMachinesLock for docker-flags-664100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 13:15:22.704276   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:15:22.704276   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:22.704276   10084 sshutil.go:53] new ssh client: &{IP:172.25.26.33 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-735200\id_rsa Username:docker}
	I0317 13:15:22.822253   10084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (6.2518556s)
	I0317 13:15:22.833211   10084 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:15:22.841540   10084 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:15:22.841540   10084 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0317 13:15:22.842144   10084 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0317 13:15:22.843104   10084 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> 89402.pem in /etc/ssl/certs
	I0317 13:15:22.856913   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:15:22.879477   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /etc/ssl/certs/89402.pem (1708 bytes)
	I0317 13:15:22.928697   10084 start.go:296] duration metric: took 6.3756152s for postStartSetup
	I0317 13:15:22.931199   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:15:25.176095   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:25.176095   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:25.176244   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:27.790826   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:15:27.790826   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:27.791238   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\config.json ...
	I0317 13:15:27.794633   10084 start.go:128] duration metric: took 2m19.4051263s to createHost
	I0317 13:15:27.794710   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:15:29.958882   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:29.959504   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:29.959589   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:32.572165   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:15:32.572165   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:32.577161   10084 main.go:141] libmachine: Using SSH client type: native
	I0317 13:15:32.577161   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.26.33 22 <nil> <nil>}
	I0317 13:15:32.577161   10084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:15:32.715225   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742217332.739100495
	
	I0317 13:15:32.715225   10084 fix.go:216] guest clock: 1742217332.739100495
	I0317 13:15:32.715300   10084 fix.go:229] Guest: 2025-03-17 13:15:32.739100495 +0000 UTC Remote: 2025-03-17 13:15:27.7946334 +0000 UTC m=+340.785027101 (delta=4.944467095s)
	I0317 13:15:32.715300   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:15:34.862568   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:34.863564   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:34.863564   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:37.678778    2276 start.go:364] duration metric: took 4m19.9205964s to acquireMachinesLock for "pause-471400"
	I0317 13:15:37.679677    2276 start.go:96] Skipping create...Using existing machine configuration
	I0317 13:15:37.679677    2276 fix.go:54] fixHost starting: 
	I0317 13:15:37.680453    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:15:40.023699    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:40.023699    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:40.023699    2276 fix.go:112] recreateIfNeeded on pause-471400: state=Running err=<nil>
	W0317 13:15:40.023699    2276 fix.go:138] unexpected machine state, will restart: <nil>
	I0317 13:15:40.030166    2276 out.go:177] * Updating the running hyperv "pause-471400" VM ...
	I0317 13:15:40.032400    2276 machine.go:93] provisionDockerMachine start ...
	I0317 13:15:40.032400    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:15:37.515882   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:15:37.515992   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:37.520574   10084 main.go:141] libmachine: Using SSH client type: native
	I0317 13:15:37.521404   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.26.33 22 <nil> <nil>}
	I0317 13:15:37.521404   10084 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1742217332
	I0317 13:15:37.678778   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 17 13:15:32 UTC 2025
	
	I0317 13:15:37.678778   10084 fix.go:236] clock set: Mon Mar 17 13:15:32 UTC 2025
	 (err=<nil>)
	I0317 13:15:37.678778   10084 start.go:83] releasing machines lock for "cert-expiration-735200", held for 2m29.2895158s
	I0317 13:15:37.678778   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:15:40.004837   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:40.004837   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:40.005539   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:42.333417    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:42.333417    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:42.333417    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:45.093520    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:15:45.093520    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:45.100127    2276 main.go:141] libmachine: Using SSH client type: native
	I0317 13:15:45.100583    2276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.3 22 <nil> <nil>}
	I0317 13:15:45.100583    2276 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 13:15:45.245898    2276 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-471400
	
	I0317 13:15:45.245898    2276 buildroot.go:166] provisioning hostname "pause-471400"
	I0317 13:15:45.246037    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:15:42.761089   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:15:42.761089   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:42.769029   10084 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0317 13:15:42.769029   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:15:42.781633   10084 ssh_runner.go:195] Run: cat /version.json
	I0317 13:15:42.781633   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:15:45.131894   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:45.131894   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:45.131894   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:45.131894   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:45.131894   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:45.131894   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:47.643913    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:47.644126    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:47.644225    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:50.330442    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:15:50.330442    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:50.342292    2276 main.go:141] libmachine: Using SSH client type: native
	I0317 13:15:50.343069    2276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.3 22 <nil> <nil>}
	I0317 13:15:50.343069    2276 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-471400 && echo "pause-471400" | sudo tee /etc/hostname
	I0317 13:15:50.518938    2276 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-471400
	
	I0317 13:15:50.518971    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:15:48.025192   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:15:48.025192   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:48.025408   10084 sshutil.go:53] new ssh client: &{IP:172.25.26.33 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-735200\id_rsa Username:docker}
	I0317 13:15:48.051725   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:15:48.051725   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:48.052187   10084 sshutil.go:53] new ssh client: &{IP:172.25.26.33 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-735200\id_rsa Username:docker}
	I0317 13:15:48.121601   10084 ssh_runner.go:235] Completed: cat /version.json: (5.3399087s)
	I0317 13:15:48.133722   10084 ssh_runner.go:195] Run: systemctl --version
	I0317 13:15:48.139761   10084 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.3705678s)
	W0317 13:15:48.139761   10084 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0317 13:15:48.158648   10084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:15:48.169279   10084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:15:48.182383   10084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:15:48.215174   10084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 13:15:48.215327   10084 start.go:495] detecting cgroup driver to use...
	I0317 13:15:48.215392   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0317 13:15:48.251994   10084 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0317 13:15:48.251994   10084 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0317 13:15:48.268981   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 13:15:48.302775   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 13:15:48.325486   10084 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 13:15:48.338040   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 13:15:48.370634   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 13:15:48.401453   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 13:15:48.431424   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 13:15:48.463628   10084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:15:48.495288   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 13:15:48.525755   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 13:15:48.557328   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 13:15:48.587885   10084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:15:48.606039   10084 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 13:15:48.617767   10084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 13:15:48.653186   10084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:15:48.684979   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:15:48.900040   10084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 13:15:48.932913   10084 start.go:495] detecting cgroup driver to use...
	I0317 13:15:48.943837   10084 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0317 13:15:48.982219   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:15:49.017237   10084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:15:49.064607   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:15:49.103129   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 13:15:49.140710   10084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 13:15:49.204040   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 13:15:49.237448   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:15:49.293424   10084 ssh_runner.go:195] Run: which cri-dockerd
	I0317 13:15:49.313300   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0317 13:15:49.332960   10084 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0317 13:15:49.375679   10084 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0317 13:15:49.580406   10084 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0317 13:15:49.776163   10084 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0317 13:15:49.776531   10084 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0317 13:15:49.822471   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:15:50.030184   10084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 13:15:52.659946   10084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6296866s)
	I0317 13:15:52.673279   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0317 13:15:52.709044   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 13:15:52.749322   10084 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0317 13:15:52.968283   10084 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0317 13:15:53.191229   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:15:53.414323   10084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0317 13:15:53.459805   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 13:15:53.496759   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:15:53.704311   10084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0317 13:15:53.839399   10084 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0317 13:15:53.853577   10084 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0317 13:15:53.863927   10084 start.go:563] Will wait 60s for crictl version
	I0317 13:15:53.876591   10084 ssh_runner.go:195] Run: which crictl
	I0317 13:15:53.894838   10084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:15:53.950431   10084 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0317 13:15:53.960321   10084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 13:15:54.007178   10084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 13:15:52.738920    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:52.739038    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:52.739038    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:55.567523    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:15:55.567523    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:55.572503    2276 main.go:141] libmachine: Using SSH client type: native
	I0317 13:15:55.572503    2276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.3 22 <nil> <nil>}
	I0317 13:15:55.572503    2276 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-471400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-471400/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-471400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:15:55.723578    2276 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:15:55.723578    2276 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0317 13:15:55.723578    2276 buildroot.go:174] setting up certificates
	I0317 13:15:55.723578    2276 provision.go:84] configureAuth start
	I0317 13:15:55.724568    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:15:54.049889   10084 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0317 13:15:54.049998   10084 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0317 13:15:54.054184   10084 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0317 13:15:54.054184   10084 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0317 13:15:54.054184   10084 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0317 13:15:54.054184   10084 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4b:84:d5 Flags:up|broadcast|multicast|running}
	I0317 13:15:54.057448   10084 ip.go:214] interface addr: fe80::f0c7:c31c:6237:ef35/64
	I0317 13:15:54.057448   10084 ip.go:214] interface addr: 172.25.16.1/20
	I0317 13:15:54.068551   10084 ssh_runner.go:195] Run: grep 172.25.16.1	host.minikube.internal$ /etc/hosts
	I0317 13:15:54.074601   10084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:15:54.098462   10084 kubeadm.go:883] updating cluster {Name:cert-expiration-735200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:cert-
expiration-735200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.26.33 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:15:54.098462   10084 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 13:15:54.109241   10084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 13:15:54.136062   10084 docker.go:689] Got preloaded images: 
	I0317 13:15:54.136098   10084 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.2 wasn't preloaded
	I0317 13:15:54.150007   10084 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0317 13:15:54.181579   10084 ssh_runner.go:195] Run: which lz4
	I0317 13:15:54.201164   10084 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 13:15:54.208859   10084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 13:15:54.208859   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349803115 bytes)
	I0317 13:15:56.339331   10084 docker.go:653] duration metric: took 2.1505782s to copy over tarball
	I0317 13:15:56.352800   10084 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 13:15:58.145545    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:58.145545    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:58.145846    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:00.894292    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:00.894292    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:00.894292    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:03.132372    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:03.132372    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:03.132372    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:05.846976    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:05.846976    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:05.846976    2276 provision.go:143] copyHostCerts
	I0317 13:16:05.846976    2276 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0317 13:16:05.846976    2276 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0317 13:16:05.846976    2276 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0317 13:16:05.856382    2276 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0317 13:16:05.856382    2276 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0317 13:16:05.856840    2276 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0317 13:16:05.857922    2276 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0317 13:16:05.857922    2276 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0317 13:16:05.857922    2276 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0317 13:16:05.859397    2276 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-471400 san=[127.0.0.1 172.25.31.3 localhost minikube pause-471400]
	I0317 13:16:05.938970    2276 provision.go:177] copyRemoteCerts
	I0317 13:16:05.950346    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:16:05.950346    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:05.014446   10084 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.661517s)
	I0317 13:16:05.014446   10084 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0317 13:16:05.103775   10084 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0317 13:16:05.124951   10084 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0317 13:16:05.175005   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:16:05.395064   10084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 13:16:08.212308    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:08.213540    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:08.213680    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:10.890015    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:10.890015    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:10.890570    2276 sshutil.go:53] new ssh client: &{IP:172.25.31.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-471400\id_rsa Username:docker}
	I0317 13:16:11.006174    2276 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0557713s)
	I0317 13:16:11.007173    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 13:16:11.071340    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:16:11.127593    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0317 13:16:11.182101    2276 provision.go:87] duration metric: took 15.4583505s to configureAuth
	I0317 13:16:11.182158    2276 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:16:11.182766    2276 config.go:182] Loaded profile config "pause-471400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:16:11.182818    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:08.591003   10084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1958373s)
	I0317 13:16:08.603788   10084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 13:16:08.643603   10084 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0317 13:16:08.643603   10084 cache_images.go:84] Images are preloaded, skipping loading
	I0317 13:16:08.644197   10084 kubeadm.go:934] updating node { 172.25.26.33 8443 v1.32.2 docker true true} ...
	I0317 13:16:08.644305   10084 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-735200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.26.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:cert-expiration-735200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 13:16:08.654995   10084 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0317 13:16:08.735924   10084 cni.go:84] Creating CNI manager for ""
	I0317 13:16:08.736123   10084 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 13:16:08.736123   10084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:16:08.736188   10084 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.26.33 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-735200 NodeName:cert-expiration-735200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.26.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.26.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 13:16:08.736362   10084 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.26.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "cert-expiration-735200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.26.33"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.26.33"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 13:16:08.748823   10084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 13:16:08.768149   10084 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:16:08.781501   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:16:08.801938   10084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0317 13:16:08.844323   10084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:16:08.879895   10084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
	I0317 13:16:08.924852   10084 ssh_runner.go:195] Run: grep 172.25.26.33	control-plane.minikube.internal$ /etc/hosts
	I0317 13:16:08.931934   10084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.26.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
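The hosts-file update in the line above uses a grep-then-append pattern so it stays idempotent across restarts: any stale entry for the name is filtered out before the current IP mapping is re-appended. A minimal standalone sketch of that same pattern, using a temp file instead of the real /etc/hosts (file path and contents here are illustrative):

```shell
# Sketch of the idempotent hosts update above, against a temp file.
HOSTS=$(mktemp)
TAB=$(printf '\t')
printf '127.0.0.1\tlocalhost\n172.25.26.33\tcontrol-plane.minikube.internal\n' > "$HOSTS"

update_hosts() {
  # Drop any existing entry for the name, then append the current IP -> name mapping.
  { grep -v "${TAB}control-plane.minikube.internal\$" "$HOSTS"
    printf '172.25.26.33\tcontrol-plane.minikube.internal\n'
  } > "$HOSTS.new"
  mv "$HOSTS.new" "$HOSTS"
}

update_hosts
update_hosts   # running twice must not duplicate the entry

COUNT=$(grep -c 'control-plane.minikube.internal' "$HOSTS")
echo "$COUNT"
rm -f "$HOSTS"
```

Because the filter runs before the append, re-running the command after an IP change replaces the old mapping rather than accumulating entries.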
	I0317 13:16:08.967134   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:16:09.173356   10084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:16:09.208512   10084 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200 for IP: 172.25.26.33
	I0317 13:16:09.208555   10084 certs.go:194] generating shared ca certs ...
	I0317 13:16:09.208555   10084 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:09.209647   10084 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0317 13:16:09.209994   10084 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0317 13:16:09.210232   10084 certs.go:256] generating profile certs ...
	I0317 13:16:09.210898   10084 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\client.key
	I0317 13:16:09.211013   10084 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\client.crt with IP's: []
	I0317 13:16:09.331844   10084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\client.crt ...
	I0317 13:16:09.331844   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\client.crt: {Name:mkfc95eec9a09c287b456d437f306c7394253466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:09.333840   10084 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\client.key ...
	I0317 13:16:09.333840   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\client.key: {Name:mk2681d3afe0b88cde2b3c3018a78070247e4809 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:09.334885   10084 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.key.0e4993fe
	I0317 13:16:09.335852   10084 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.crt.0e4993fe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.26.33]
	I0317 13:16:09.820277   10084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.crt.0e4993fe ...
	I0317 13:16:09.820277   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.crt.0e4993fe: {Name:mkbb592f7beb6bef58a4fcc965da6636e600e7bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:09.821329   10084 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.key.0e4993fe ...
	I0317 13:16:09.821329   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.key.0e4993fe: {Name:mkd1fd758c7a2ea97e380883f6fda251dc135c4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:09.822297   10084 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.crt.0e4993fe -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.crt
	I0317 13:16:09.837307   10084 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.key.0e4993fe -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.key
	I0317 13:16:09.838311   10084 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\proxy-client.key
	I0317 13:16:09.838311   10084 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\proxy-client.crt with IP's: []
	I0317 13:16:10.130052   10084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\proxy-client.crt ...
	I0317 13:16:10.130052   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\proxy-client.crt: {Name:mk195c67a2cbece6211ae24cd3c4b34154ce48a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:10.132013   10084 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\proxy-client.key ...
	I0317 13:16:10.132013   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\proxy-client.key: {Name:mk6f7d9fb1ec8ffb89e0bcbd199a6def3c149cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:10.146619   10084 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem (1338 bytes)
	W0317 13:16:10.147022   10084 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940_empty.pem, impossibly tiny 0 bytes
	I0317 13:16:10.147022   10084 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0317 13:16:10.147022   10084 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0317 13:16:10.147022   10084 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0317 13:16:10.148044   10084 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0317 13:16:10.148044   10084 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem (1708 bytes)
	I0317 13:16:10.150485   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:16:10.201841   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:16:10.253384   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:16:10.296325   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0317 13:16:10.342516   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0317 13:16:10.389999   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 13:16:10.438138   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:16:10.490212   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 13:16:10.540288   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:16:10.595094   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem --> /usr/share/ca-certificates/8940.pem (1338 bytes)
	I0317 13:16:10.644409   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /usr/share/ca-certificates/89402.pem (1708 bytes)
	I0317 13:16:10.691418   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:16:10.734863   10084 ssh_runner.go:195] Run: openssl version
	I0317 13:16:10.756429   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/89402.pem && ln -fs /usr/share/ca-certificates/89402.pem /etc/ssl/certs/89402.pem"
	I0317 13:16:10.793498   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89402.pem
	I0317 13:16:10.801501   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:46 /usr/share/ca-certificates/89402.pem
	I0317 13:16:10.811483   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89402.pem
	I0317 13:16:10.831433   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/89402.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:16:10.862899   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:16:10.896714   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:16:10.907493   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:16:10.919422   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:16:10.945921   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 13:16:10.978521   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8940.pem && ln -fs /usr/share/ca-certificates/8940.pem /etc/ssl/certs/8940.pem"
	I0317 13:16:11.009170   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8940.pem
	I0317 13:16:11.016802   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:46 /usr/share/ca-certificates/8940.pem
	I0317 13:16:11.027980   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8940.pem
	I0317 13:16:11.048758   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8940.pem /etc/ssl/certs/51391683.0"
	I0317 13:16:11.083350   10084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:16:11.091339   10084 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 13:16:11.091339   10084 kubeadm.go:392] StartCluster: {Name:cert-expiration-735200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:cert-exp
iration-735200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.26.33 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:16:11.100342   10084 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0317 13:16:11.141400   10084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:16:11.171429   10084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:16:11.203422   10084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:16:11.227584   10084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:16:11.227584   10084 kubeadm.go:157] found existing configuration files:
	
	I0317 13:16:11.239726   10084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:16:11.260551   10084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:16:11.273179   10084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:16:11.317966   10084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:16:11.341628   10084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:16:11.358696   10084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:16:11.394739   10084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:16:11.412816   10084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:16:11.426816   10084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:16:11.456201   10084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:16:11.474816   10084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:16:11.485877   10084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
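The four grep-then-rm sequences above check whether each kubeconfig still references the expected control-plane endpoint and delete it otherwise, so that the following `kubeadm init` starts from a clean slate. A sketch of that check against a temp file (the file and its stale contents are illustrative; the endpoint is the one from the log):

```shell
# Keep the kubeconfig only if it still references the expected endpoint,
# mirroring the "sudo grep <endpoint> <conf>" / "sudo rm -f <conf>" pair above.
ENDPOINT='https://control-plane.minikube.internal:8443'
CONF=$(mktemp)
printf 'server: https://old-endpoint:8443\n' > "$CONF"

# grep -q exits nonzero when the endpoint is absent, triggering the removal.
grep -q "$ENDPOINT" "$CONF" || rm -f "$CONF"

RESULT=kept
[ -e "$CONF" ] || RESULT=removed
echo "$RESULT"
```

In the log the files do not exist at all (status 2 from grep), which takes the same removal branch and is why kubeadm regenerates them all during init.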
	I0317 13:16:11.503410   10084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 13:16:12.000093   10084 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 13:16:13.395657    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:13.395926    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:13.396199    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:16.023039    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:16.023039    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:16.029903    2276 main.go:141] libmachine: Using SSH client type: native
	I0317 13:16:16.030805    2276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.3 22 <nil> <nil>}
	I0317 13:16:16.030805    2276 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0317 13:16:16.173038    2276 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0317 13:16:16.173038    2276 buildroot.go:70] root file system type: tmpfs
	I0317 13:16:16.173038    2276 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0317 13:16:16.173038    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:18.389773    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:18.389830    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:18.389963    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:21.076529    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:21.077479    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:21.084373    2276 main.go:141] libmachine: Using SSH client type: native
	I0317 13:16:21.085064    2276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.3 22 <nil> <nil>}
	I0317 13:16:21.085064    2276 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0317 13:16:21.275965    2276 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0317 13:16:21.276059    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
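As the comments inside the unit file above explain, the drop-in must emit an empty `ExecStart=` before its own `ExecStart=...` line, because systemd otherwise rejects a second ExecStart for a non-oneshot service. A small sketch that writes a unit the same way and checks the clearing directive comes first (the temp path and the shortened dockerd command are illustrative, not the provisioner's exact output):

```shell
# Write a docker.service override the way the provisioner does and verify the
# empty ExecStart= (which clears the inherited command) precedes the real one.
UNIT=$(mktemp)
printf '%s\n' \
  '[Service]' \
  'ExecStart=' \
  'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' \
  > "$UNIT"

N=$(grep -c '^ExecStart=' "$UNIT")
FIRST=$(grep '^ExecStart=' "$UNIT" | head -n 1)
echo "$N $FIRST"
rm -f "$UNIT"
```

Without the empty directive, systemd would refuse the unit with "Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services", exactly the failure mode the unit's comment block describes.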
	I0317 13:16:26.867476   10084 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 13:16:26.867476   10084 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 13:16:26.868119   10084 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 13:16:26.868490   10084 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 13:16:26.868490   10084 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 13:16:26.868490   10084 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 13:16:26.871592   10084 out.go:235]   - Generating certificates and keys ...
	I0317 13:16:26.872512   10084 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 13:16:26.872683   10084 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 13:16:26.872852   10084 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 13:16:26.872852   10084 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 13:16:26.873409   10084 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 13:16:26.873503   10084 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 13:16:26.873503   10084 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 13:16:26.874163   10084 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-735200 localhost] and IPs [172.25.26.33 127.0.0.1 ::1]
	I0317 13:16:26.874163   10084 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 13:16:26.874756   10084 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-735200 localhost] and IPs [172.25.26.33 127.0.0.1 ::1]
	I0317 13:16:26.874756   10084 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 13:16:26.874756   10084 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 13:16:26.874756   10084 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 13:16:26.875430   10084 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 13:16:26.875508   10084 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 13:16:26.875508   10084 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 13:16:26.875508   10084 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 13:16:26.876044   10084 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 13:16:26.876148   10084 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 13:16:26.876302   10084 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 13:16:26.876302   10084 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 13:16:26.898707   10084 out.go:235]   - Booting up control plane ...
	I0317 13:16:26.899692   10084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 13:16:26.899757   10084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 13:16:26.899757   10084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 13:16:26.900475   10084 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 13:16:26.900609   10084 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 13:16:26.900854   10084 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 13:16:26.901252   10084 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 13:16:26.901706   10084 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 13:16:26.902329   10084 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002510142s
	I0317 13:16:26.902473   10084 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 13:16:26.902655   10084 kubeadm.go:310] [api-check] The API server is healthy after 7.502380999s
	I0317 13:16:26.902943   10084 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 13:16:26.903533   10084 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 13:16:26.903799   10084 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 13:16:26.904605   10084 kubeadm.go:310] [mark-control-plane] Marking the node cert-expiration-735200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 13:16:26.904742   10084 kubeadm.go:310] [bootstrap-token] Using token: dn2h2j.b5b63hbxefnjchqa
	I0317 13:16:26.907543   10084 out.go:235]   - Configuring RBAC rules ...
	I0317 13:16:26.907997   10084 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 13:16:26.908220   10084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 13:16:26.908220   10084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 13:16:26.908929   10084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 13:16:26.909475   10084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 13:16:26.909670   10084 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 13:16:26.909670   10084 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 13:16:26.909670   10084 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 13:16:26.910216   10084 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 13:16:26.910216   10084 kubeadm.go:310] 
	I0317 13:16:26.910283   10084 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 13:16:26.910283   10084 kubeadm.go:310] 
	I0317 13:16:26.910283   10084 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 13:16:26.910283   10084 kubeadm.go:310] 
	I0317 13:16:26.910910   10084 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 13:16:26.910910   10084 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 13:16:26.910910   10084 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 13:16:26.910910   10084 kubeadm.go:310] 
	I0317 13:16:26.910910   10084 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 13:16:26.910910   10084 kubeadm.go:310] 
	I0317 13:16:26.910910   10084 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 13:16:26.911483   10084 kubeadm.go:310] 
	I0317 13:16:26.911628   10084 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 13:16:26.911628   10084 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 13:16:26.911628   10084 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 13:16:26.911628   10084 kubeadm.go:310] 
	I0317 13:16:26.911628   10084 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 13:16:26.911628   10084 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 13:16:26.911628   10084 kubeadm.go:310] 
	I0317 13:16:26.912558   10084 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dn2h2j.b5b63hbxefnjchqa \
	I0317 13:16:26.912558   10084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 \
	I0317 13:16:26.912558   10084 kubeadm.go:310] 	--control-plane 
	I0317 13:16:26.912558   10084 kubeadm.go:310] 
	I0317 13:16:26.912558   10084 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 13:16:26.912558   10084 kubeadm.go:310] 
	I0317 13:16:26.913552   10084 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dn2h2j.b5b63hbxefnjchqa \
	I0317 13:16:26.913552   10084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 
	I0317 13:16:26.913552   10084 cni.go:84] Creating CNI manager for ""
	I0317 13:16:26.913552   10084 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 13:16:26.917550   10084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0317 13:16:23.537836    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:23.537836    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:23.537836    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:26.246485    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:26.247494    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:26.253661    2276 main.go:141] libmachine: Using SSH client type: native
	I0317 13:16:26.254316    2276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.3 22 <nil> <nil>}
	I0317 13:16:26.254316    2276 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0317 13:16:26.408871    2276 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:16:26.408871    2276 machine.go:96] duration metric: took 46.3759547s to provisionDockerMachine
	I0317 13:16:26.408871    2276 start.go:293] postStartSetup for "pause-471400" (driver="hyperv")
	I0317 13:16:26.408871    2276 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:16:26.421819    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:16:26.421819    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:26.931531   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0317 13:16:26.952139   10084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0317 13:16:26.995391   10084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 13:16:27.009696   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:16:27.012872   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-735200 minikube.k8s.io/updated_at=2025_03_17T13_16_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=cert-expiration-735200 minikube.k8s.io/primary=true
	I0317 13:16:27.039086   10084 ops.go:34] apiserver oom_adj: -16
	I0317 13:16:27.468118   10084 kubeadm.go:1113] duration metric: took 472.5592ms to wait for elevateKubeSystemPrivileges
	I0317 13:16:27.468199   10084 kubeadm.go:394] duration metric: took 16.3766765s to StartCluster
	I0317 13:16:27.468264   10084 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:27.468487   10084 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 13:16:27.471105   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:27.472633   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 13:16:27.472633   10084 start.go:235] Will wait 6m0s for node &{Name: IP:172.25.26.33 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 13:16:27.472756   10084 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 13:16:27.472851   10084 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-735200"
	I0317 13:16:27.472979   10084 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-735200"
	I0317 13:16:27.472979   10084 host.go:66] Checking if "cert-expiration-735200" exists ...
	I0317 13:16:27.474900   10084 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-735200"
	I0317 13:16:27.474900   10084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-735200"
	I0317 13:16:27.475046   10084 config.go:182] Loaded profile config "cert-expiration-735200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:16:27.476358   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:16:27.477483   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:16:27.477483   10084 out.go:177] * Verifying Kubernetes components...
	I0317 13:16:27.500270   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:16:27.776476   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.16.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 13:16:27.969681   10084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:16:28.456554   10084 start.go:971] {"host.minikube.internal": 172.25.16.1} host record injected into CoreDNS's ConfigMap
	I0317 13:16:28.461767   10084 api_server.go:52] waiting for apiserver process to appear ...
	I0317 13:16:28.473140   10084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:16:28.519497   10084 api_server.go:72] duration metric: took 1.0466338s to wait for apiserver process to appear ...
	I0317 13:16:28.519497   10084 api_server.go:88] waiting for apiserver healthz status ...
	I0317 13:16:28.519614   10084 api_server.go:253] Checking apiserver healthz at https://172.25.26.33:8443/healthz ...
	I0317 13:16:28.529782   10084 api_server.go:279] https://172.25.26.33:8443/healthz returned 200:
	ok
	I0317 13:16:28.532492   10084 api_server.go:141] control plane version: v1.32.2
	I0317 13:16:28.532492   10084 api_server.go:131] duration metric: took 12.9945ms to wait for apiserver health ...
	I0317 13:16:28.532586   10084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 13:16:28.539203   10084 system_pods.go:59] 4 kube-system pods found
	I0317 13:16:28.539203   10084 system_pods.go:61] "etcd-cert-expiration-735200" [8e6cd2c1-d5aa-4d29-ab6c-55d11fdfb4a7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0317 13:16:28.539203   10084 system_pods.go:61] "kube-apiserver-cert-expiration-735200" [1ed63eb0-2c99-40d5-9855-c8cea8f64d13] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0317 13:16:28.539203   10084 system_pods.go:61] "kube-controller-manager-cert-expiration-735200" [3625e5bd-4ca0-4167-a45d-9a0a5965d35a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0317 13:16:28.539203   10084 system_pods.go:61] "kube-scheduler-cert-expiration-735200" [990b6835-e0d8-4354-90b2-f2a3b3422304] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0317 13:16:28.539203   10084 system_pods.go:74] duration metric: took 6.6175ms to wait for pod list to return data ...
	I0317 13:16:28.539203   10084 kubeadm.go:582] duration metric: took 1.06634s to wait for: map[apiserver:true system_pods:true]
	I0317 13:16:28.539203   10084 node_conditions.go:102] verifying NodePressure condition ...
	I0317 13:16:28.544027   10084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 13:16:28.544027   10084 node_conditions.go:123] node cpu capacity is 2
	I0317 13:16:28.544027   10084 node_conditions.go:105] duration metric: took 4.8239ms to run NodePressure ...
	I0317 13:16:28.544027   10084 start.go:241] waiting for startup goroutines ...
	I0317 13:16:28.965274   10084 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-735200" context rescaled to 1 replicas
	I0317 13:16:30.002822   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:30.002822   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:30.005761   10084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:16:28.884197    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:28.884594    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:28.884659    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:31.888974    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:31.888974    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:31.888974    2276 sshutil.go:53] new ssh client: &{IP:172.25.31.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-471400\id_rsa Username:docker}
	I0317 13:16:30.012112   10084 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:16:30.012112   10084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 13:16:30.012112   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:16:30.028399   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:30.028399   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:30.030879   10084 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-735200"
	I0317 13:16:30.030879   10084 host.go:66] Checking if "cert-expiration-735200" exists ...
	I0317 13:16:30.032275   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:16:32.017629    2276 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.5957478s)
	I0317 13:16:32.031225    2276 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:16:32.039551    2276 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:16:32.039551    2276 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0317 13:16:32.039551    2276 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0317 13:16:32.041782    2276 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> 89402.pem in /etc/ssl/certs
	I0317 13:16:32.053772    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:16:32.081080    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /etc/ssl/certs/89402.pem (1708 bytes)
	I0317 13:16:32.141719    2276 start.go:296] duration metric: took 5.7327834s for postStartSetup
	I0317 13:16:32.141841    2276 fix.go:56] duration metric: took 54.4615566s for fixHost
	I0317 13:16:32.141916    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:34.678109    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:34.678324    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:34.678324    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:32.662000   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:32.662000   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:32.662465   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:32.666262   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:32.666262   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:32.666335   10084 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 13:16:32.666335   10084 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 13:16:32.666393   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:16:35.202742   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:35.202742   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:35.202862   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:35.646605   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:16:35.646889   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:35.647418   10084 sshutil.go:53] new ssh client: &{IP:172.25.26.33 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-735200\id_rsa Username:docker}
	I0317 13:16:35.805277   10084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:16:38.121052   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:16:38.121052   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:38.121602   10084 sshutil.go:53] new ssh client: &{IP:172.25.26.33 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-735200\id_rsa Username:docker}
	I0317 13:16:38.285249   10084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 13:16:38.472218   10084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 13:16:38.475871   10084 addons.go:514] duration metric: took 11.0029913s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0317 13:16:38.475993   10084 start.go:246] waiting for cluster config update ...
	I0317 13:16:38.475993   10084 start.go:255] writing updated cluster config ...
	I0317 13:16:38.487912   10084 ssh_runner.go:195] Run: rm -f paused
	I0317 13:16:38.647555   10084 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 13:16:38.652513   10084 out.go:177] * Done! kubectl is now configured to use "cert-expiration-735200" cluster and "default" namespace by default
	I0317 13:16:37.562007    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:37.562651    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:37.570741    2276 main.go:141] libmachine: Using SSH client type: native
	I0317 13:16:37.571307    2276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.3 22 <nil> <nil>}
	I0317 13:16:37.571307    2276 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:16:37.724030    2276 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742217397.748398288
	
	I0317 13:16:37.724117    2276 fix.go:216] guest clock: 1742217397.748398288
	I0317 13:16:37.724117    2276 fix.go:229] Guest: 2025-03-17 13:16:37.748398288 +0000 UTC Remote: 2025-03-17 13:16:32.1418416 +0000 UTC m=+320.366006801 (delta=5.606556688s)
	I0317 13:16:37.724238    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:39.990292    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:39.990525    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:39.990525    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:42.747631    8032 start.go:364] duration metric: took 4m17.4256234s to acquireMachinesLock for "kubernetes-upgrade-816300"
	I0317 13:16:42.748084    8032 start.go:96] Skipping create...Using existing machine configuration
	I0317 13:16:42.748125    8032 fix.go:54] fixHost starting: 
	I0317 13:16:42.749144    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:16:44.980088    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:44.980321    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:44.980390    8032 fix.go:112] recreateIfNeeded on kubernetes-upgrade-816300: state=Running err=<nil>
	W0317 13:16:44.980390    8032 fix.go:138] unexpected machine state, will restart: <nil>
	I0317 13:16:44.984751    8032 out.go:177] * Updating the running hyperv "kubernetes-upgrade-816300" VM ...
	I0317 13:16:42.589156    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:42.589219    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:42.597230    2276 main.go:141] libmachine: Using SSH client type: native
	I0317 13:16:42.597230    2276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.3 22 <nil> <nil>}
	I0317 13:16:42.597230    2276 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1742217397
	I0317 13:16:42.747276    2276 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 17 13:16:37 UTC 2025
	
	I0317 13:16:42.747370    2276 fix.go:236] clock set: Mon Mar 17 13:16:37 UTC 2025
	 (err=<nil>)
	I0317 13:16:42.747370    2276 start.go:83] releasing machines lock for "pause-471400", held for 1m5.067322s
	I0317 13:16:42.747631    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:44.990180    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:44.990180    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:44.990180    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:44.986912    8032 machine.go:93] provisionDockerMachine start ...
	I0317 13:16:44.986912    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:16:47.323165    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:47.324072    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:47.324293    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:47.705533    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:47.705533    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:47.711963    2276 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0317 13:16:47.712167    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:47.725066    2276 ssh_runner.go:195] Run: cat /version.json
	I0317 13:16:47.725066    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:50.135254    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:50.135354    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:50.135575    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:50.136208    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:50.136525    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:50.136525    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:50.214206    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:16:50.214669    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:50.221082    8032 main.go:141] libmachine: Using SSH client type: native
	I0317 13:16:50.221578    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.15 22 <nil> <nil>}
	I0317 13:16:50.221578    8032 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 13:16:50.360096    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-816300
	
	I0317 13:16:50.360096    8032 buildroot.go:166] provisioning hostname "kubernetes-upgrade-816300"
	I0317 13:16:50.360096    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:16:52.766650    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:52.767101    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:52.767101    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:52.938839    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:52.938839    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:52.939792    2276 sshutil.go:53] new ssh client: &{IP:172.25.31.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-471400\id_rsa Username:docker}
	I0317 13:16:52.969403    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:52.969403    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:52.969403    2276 sshutil.go:53] new ssh client: &{IP:172.25.31.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-471400\id_rsa Username:docker}
	I0317 13:16:53.037516    2276 ssh_runner.go:235] Completed: cat /version.json: (5.31239s)
	I0317 13:16:53.049341    2276 ssh_runner.go:195] Run: systemctl --version
	I0317 13:16:53.054990    2276 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.3428967s)
	W0317 13:16:53.054990    2276 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0317 13:16:53.076344    2276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:16:53.086196    2276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:16:53.098700    2276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:16:53.122530    2276 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0317 13:16:53.122530    2276 start.go:495] detecting cgroup driver to use...
	I0317 13:16:53.122530    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0317 13:16:53.169780    2276 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0317 13:16:53.169780    2276 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0317 13:16:53.172931    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 13:16:53.208885    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 13:16:53.231463    2276 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 13:16:53.242982    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 13:16:53.285654    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 13:16:53.320532    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 13:16:53.354949    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 13:16:53.398468    2276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:16:53.435765    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 13:16:53.470095    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 13:16:53.503234    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 13:16:53.540051    2276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:16:53.570936    2276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:16:53.603642    2276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:16:53.888025    2276 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 13:16:53.921380    2276 start.go:495] detecting cgroup driver to use...
	I0317 13:16:53.933070    2276 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0317 13:16:53.973904    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:16:54.012103    2276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:16:54.068213    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:16:54.117839    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 13:16:54.145797    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:16:54.206196    2276 ssh_runner.go:195] Run: which cri-dockerd
	I0317 13:16:54.225233    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0317 13:16:54.245716    2276 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0317 13:16:54.297550    2276 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0317 13:16:54.582515    2276 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0317 13:16:54.862931    2276 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0317 13:16:54.863202    2276 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0317 13:16:54.910896    2276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:16:55.195275    2276 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 13:16:55.403447    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:16:55.404199    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:55.412586    8032 main.go:141] libmachine: Using SSH client type: native
	I0317 13:16:55.413488    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.15 22 <nil> <nil>}
	I0317 13:16:55.413488    8032 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-816300 && echo "kubernetes-upgrade-816300" | sudo tee /etc/hostname
	I0317 13:16:55.577055    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-816300
	
	I0317 13:16:55.577055    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:16:57.826808    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:57.826808    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:57.827078    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:00.390013    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:00.390013    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:00.396745    8032 main.go:141] libmachine: Using SSH client type: native
	I0317 13:17:00.397211    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.15 22 <nil> <nil>}
	I0317 13:17:00.397332    8032 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-816300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-816300/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-816300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:17:00.527700    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:17:00.527700    8032 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0317 13:17:00.527772    8032 buildroot.go:174] setting up certificates
	I0317 13:17:00.527947    8032 provision.go:84] configureAuth start
	I0317 13:17:00.528018    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:02.745460    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:02.745713    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:02.745773    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:05.343749    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:05.343749    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:05.344406    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:07.588141    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:07.588141    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:07.588479    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:08.331869    2276 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.1363806s)
	I0317 13:17:08.344028    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0317 13:17:08.390206    2276 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0317 13:17:08.439766    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 13:17:08.478915    2276 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0317 13:17:08.702497    2276 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0317 13:17:08.935046    2276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:17:09.149392    2276 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0317 13:17:09.194595    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 13:17:09.234225    2276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:17:09.461143    2276 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0317 13:17:09.599960    2276 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0317 13:17:09.612335    2276 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0317 13:17:09.621164    2276 start.go:563] Will wait 60s for crictl version
	I0317 13:17:09.633829    2276 ssh_runner.go:195] Run: which crictl
	I0317 13:17:09.653651    2276 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:17:09.714488    2276 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0317 13:17:09.725250    2276 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 13:17:09.775143    2276 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 13:17:09.818722    2276 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0317 13:17:09.819255    2276 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0317 13:17:09.825335    2276 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0317 13:17:09.825335    2276 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0317 13:17:09.825335    2276 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0317 13:17:09.825335    2276 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4b:84:d5 Flags:up|broadcast|multicast|running}
	I0317 13:17:09.829358    2276 ip.go:214] interface addr: fe80::f0c7:c31c:6237:ef35/64
	I0317 13:17:09.829358    2276 ip.go:214] interface addr: 172.25.16.1/20
	I0317 13:17:09.839894    2276 ssh_runner.go:195] Run: grep 172.25.16.1	host.minikube.internal$ /etc/hosts
	I0317 13:17:09.847495    2276 kubeadm.go:883] updating cluster {Name:pause-471400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-471400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.31.3 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:17:09.847687    2276 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 13:17:09.857476    2276 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 13:17:09.887415    2276 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0317 13:17:09.887415    2276 docker.go:619] Images already preloaded, skipping extraction
	I0317 13:17:09.897657    2276 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 13:17:09.926302    2276 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0317 13:17:09.926360    2276 cache_images.go:84] Images are preloaded, skipping loading
	I0317 13:17:09.926360    2276 kubeadm.go:934] updating node { 172.25.31.3 8443 v1.32.2 docker true true} ...
	I0317 13:17:09.926651    2276 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-471400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.31.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:pause-471400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 13:17:09.938536    2276 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0317 13:17:10.007970    2276 cni.go:84] Creating CNI manager for ""
	I0317 13:17:10.008153    2276 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 13:17:10.008222    2276 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:17:10.008222    2276 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.31.3 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-471400 NodeName:pause-471400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.31.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.31.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 13:17:10.008222    2276 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.31.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "pause-471400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.31.3"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.31.3"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 13:17:10.020853    2276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 13:17:10.042874    2276 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:17:10.055851    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:17:10.080879    2276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0317 13:17:10.119024    2276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:17:10.156661    2276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0317 13:17:10.206751    2276 ssh_runner.go:195] Run: grep 172.25.31.3	control-plane.minikube.internal$ /etc/hosts
	I0317 13:17:10.227921    2276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:17:10.471823    2276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:17:10.512237    2276 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-471400 for IP: 172.25.31.3
	I0317 13:17:10.512366    2276 certs.go:194] generating shared ca certs ...
	I0317 13:17:10.512366    2276 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:17:10.513072    2276 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0317 13:17:10.513072    2276 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0317 13:17:10.514001    2276 certs.go:256] generating profile certs ...
	I0317 13:17:10.514001    2276 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-471400\client.key
	I0317 13:17:10.515006    2276 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-471400\apiserver.key.8fb62966
	I0317 13:17:10.515006    2276 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-471400\proxy-client.key
	I0317 13:17:10.518077    2276 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem (1338 bytes)
	W0317 13:17:10.518619    2276 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940_empty.pem, impossibly tiny 0 bytes
	I0317 13:17:10.518829    2276 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0317 13:17:10.519261    2276 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0317 13:17:10.519818    2276 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0317 13:17:10.520395    2276 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0317 13:17:10.521375    2276 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem (1708 bytes)
	I0317 13:17:10.524477    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:17:10.579838    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:17:10.634157    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:17:10.687403    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0317 13:17:10.738154    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-471400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0317 13:17:10.789417    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-471400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 13:17:10.839155    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-471400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:17:10.889720    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-471400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0317 13:17:10.942549    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem --> /usr/share/ca-certificates/8940.pem (1338 bytes)
	I0317 13:17:10.995256    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /usr/share/ca-certificates/89402.pem (1708 bytes)
	I0317 13:17:11.047503    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:17:11.103120    2276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:17:11.151731    2276 ssh_runner.go:195] Run: openssl version
	I0317 13:17:11.178772    2276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8940.pem && ln -fs /usr/share/ca-certificates/8940.pem /etc/ssl/certs/8940.pem"
	I0317 13:17:11.213088    2276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8940.pem
	I0317 13:17:11.221418    2276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:46 /usr/share/ca-certificates/8940.pem
	I0317 13:17:11.236292    2276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8940.pem
	I0317 13:17:11.268836    2276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8940.pem /etc/ssl/certs/51391683.0"
	I0317 13:17:11.302583    2276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/89402.pem && ln -fs /usr/share/ca-certificates/89402.pem /etc/ssl/certs/89402.pem"
	I0317 13:17:11.339345    2276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89402.pem
	I0317 13:17:11.347411    2276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:46 /usr/share/ca-certificates/89402.pem
	I0317 13:17:11.360829    2276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89402.pem
	I0317 13:17:11.383132    2276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/89402.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:17:11.415690    2276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:17:11.448945    2276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:17:11.456845    2276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:17:11.469118    2276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:17:11.490233    2276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 13:17:11.519232    2276 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:17:11.540048    2276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0317 13:17:11.559161    2276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0317 13:17:11.580548    2276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0317 13:17:11.601929    2276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0317 13:17:11.624374    2276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0317 13:17:11.644747    2276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0317 13:17:11.653844    2276 kubeadm.go:392] StartCluster: {Name:pause-471400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-471400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.31.3 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:17:11.664527    2276 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0317 13:17:11.707129    2276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:17:11.725928    2276 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0317 13:17:11.725990    2276 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0317 13:17:11.737844    2276 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0317 13:17:11.757601    2276 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0317 13:17:11.759156    2276 kubeconfig.go:125] found "pause-471400" server: "https://172.25.31.3:8443"
	I0317 13:17:11.762253    2276 kapi.go:59] client config for pause-471400: &rest.Config{Host:"https://172.25.31.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\pause-471400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\pause-471400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2e292e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0317 13:17:11.764250    2276 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0317 13:17:11.764250    2276 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0317 13:17:11.764250    2276 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0317 13:17:11.764364    2276 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0317 13:17:11.776295    2276 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0317 13:17:11.796925    2276 kubeadm.go:630] The running cluster does not require reconfiguration: 172.25.31.3
	I0317 13:17:11.797895    2276 kubeadm.go:1160] stopping kube-system containers ...
	I0317 13:17:11.806877    2276 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0317 13:17:11.844542    2276 docker.go:483] Stopping containers: [d4f557bada23 bec7db06f9e9 0b09a5de1f0b c0ee58f77451 cecd0f7a3b60 48490bf5143c a7133843d6ed 5d96f9d335df c59c53abe2dd a77930f3d721 587f0dda7141 4a105f3090f3]
	I0317 13:17:11.859204    2276 ssh_runner.go:195] Run: docker stop d4f557bada23 bec7db06f9e9 0b09a5de1f0b c0ee58f77451 cecd0f7a3b60 48490bf5143c a7133843d6ed 5d96f9d335df c59c53abe2dd a77930f3d721 587f0dda7141 4a105f3090f3
	I0317 13:17:11.905716    2276 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0317 13:17:10.354698    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:10.354698    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:10.354698    8032 provision.go:143] copyHostCerts
	I0317 13:17:10.355720    8032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0317 13:17:10.355720    8032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0317 13:17:10.356483    8032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0317 13:17:10.358133    8032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0317 13:17:10.358188    8032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0317 13:17:10.358415    8032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0317 13:17:10.360294    8032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0317 13:17:10.360294    8032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0317 13:17:10.360294    8032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0317 13:17:10.362254    8032 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-816300 san=[127.0.0.1 172.25.31.15 kubernetes-upgrade-816300 localhost minikube]
	I0317 13:17:10.492765    8032 provision.go:177] copyRemoteCerts
	I0317 13:17:10.504249    8032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:17:10.504249    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:12.877622    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:12.877722    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:12.877779    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:11.981288    2276 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:17:12.004626    2276 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Mar 17 13:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5655 Mar 17 13:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Mar 17 13:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5603 Mar 17 13:10 /etc/kubernetes/scheduler.conf
	
	I0317 13:17:12.016225    2276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:17:12.052486    2276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:17:12.084264    2276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:17:12.102367    2276 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0317 13:17:12.113372    2276 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:17:12.146394    2276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:17:12.166152    2276 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0317 13:17:12.179056    2276 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 13:17:12.207618    2276 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:17:12.228938    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:17:12.643075    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:17:13.971104    2276 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.3280142s)
	I0317 13:17:13.971104    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:17:14.302698    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:17:14.406407    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:17:14.508662    2276 api_server.go:52] waiting for apiserver process to appear ...
	I0317 13:17:14.519080    2276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:17:15.025301    2276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:17:15.523753    2276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:17:16.019967    2276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:17:16.520248    2276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:17:16.548439    2276 api_server.go:72] duration metric: took 2.0397543s to wait for apiserver process to appear ...
	I0317 13:17:16.548565    2276 api_server.go:88] waiting for apiserver healthz status ...
	I0317 13:17:16.548633    2276 api_server.go:253] Checking apiserver healthz at https://172.25.31.3:8443/healthz ...
	I0317 13:17:15.528908    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:15.529579    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:15.530037    8032 sshutil.go:53] new ssh client: &{IP:172.25.31.15 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-816300\id_rsa Username:docker}
	I0317 13:17:15.641896    8032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1375892s)
	I0317 13:17:15.642426    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0317 13:17:15.695164    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 13:17:15.747508    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:17:15.802865    8032 provision.go:87] duration metric: took 15.2747475s to configureAuth
	I0317 13:17:15.802865    8032 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:17:15.803856    8032 config.go:182] Loaded profile config "kubernetes-upgrade-816300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:17:15.803856    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:18.099138    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:18.099138    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:18.099815    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:19.729867    2276 api_server.go:279] https://172.25.31.3:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0317 13:17:19.729952    2276 api_server.go:103] status: https://172.25.31.3:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0317 13:17:19.729952    2276 api_server.go:253] Checking apiserver healthz at https://172.25.31.3:8443/healthz ...
	I0317 13:17:19.882354    2276 api_server.go:279] https://172.25.31.3:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0317 13:17:19.882452    2276 api_server.go:103] status: https://172.25.31.3:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0317 13:17:20.048918    2276 api_server.go:253] Checking apiserver healthz at https://172.25.31.3:8443/healthz ...
	I0317 13:17:20.057566    2276 api_server.go:279] https://172.25.31.3:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0317 13:17:20.057825    2276 api_server.go:103] status: https://172.25.31.3:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0317 13:17:20.548877    2276 api_server.go:253] Checking apiserver healthz at https://172.25.31.3:8443/healthz ...
	I0317 13:17:20.564020    2276 api_server.go:279] https://172.25.31.3:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0317 13:17:20.564020    2276 api_server.go:103] status: https://172.25.31.3:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0317 13:17:21.050015    2276 api_server.go:253] Checking apiserver healthz at https://172.25.31.3:8443/healthz ...
	I0317 13:17:21.058908    2276 api_server.go:279] https://172.25.31.3:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0317 13:17:21.059876    2276 api_server.go:103] status: https://172.25.31.3:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0317 13:17:21.549308    2276 api_server.go:253] Checking apiserver healthz at https://172.25.31.3:8443/healthz ...
	I0317 13:17:21.559326    2276 api_server.go:279] https://172.25.31.3:8443/healthz returned 200:
	ok
	I0317 13:17:21.571712    2276 api_server.go:141] control plane version: v1.32.2
	I0317 13:17:21.571712    2276 api_server.go:131] duration metric: took 5.0230907s to wait for apiserver health ...
	I0317 13:17:21.571712    2276 cni.go:84] Creating CNI manager for ""
	I0317 13:17:21.571712    2276 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 13:17:21.575214    2276 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0317 13:17:21.590464    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0317 13:17:21.617169    2276 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0317 13:17:21.653927    2276 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 13:17:21.660057    2276 system_pods.go:59] 6 kube-system pods found
	I0317 13:17:21.660057    2276 system_pods.go:61] "coredns-668d6bf9bc-2xpj4" [704a1878-5d2f-4871-98ac-ced7ddfbc684] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 13:17:21.660057    2276 system_pods.go:61] "etcd-pause-471400" [98e4a9fc-1ef0-4a40-a394-634314ddd363] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0317 13:17:21.660057    2276 system_pods.go:61] "kube-apiserver-pause-471400" [032360c0-bb5e-497f-ac77-134a17fab99f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0317 13:17:21.660057    2276 system_pods.go:61] "kube-controller-manager-pause-471400" [b982ef81-1c85-4fd8-838b-2b8bbf1993d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0317 13:17:21.660057    2276 system_pods.go:61] "kube-proxy-2w5n2" [d2be3017-491d-427e-982e-7fcdf387b94a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0317 13:17:21.660057    2276 system_pods.go:61] "kube-scheduler-pause-471400" [0a95a12f-a384-429a-93e0-8c27dbbe9c3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0317 13:17:21.660057    2276 system_pods.go:74] duration metric: took 6.0764ms to wait for pod list to return data ...
	I0317 13:17:21.660057    2276 node_conditions.go:102] verifying NodePressure condition ...
	I0317 13:17:21.670471    2276 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 13:17:21.670533    2276 node_conditions.go:123] node cpu capacity is 2
	I0317 13:17:21.670533    2276 node_conditions.go:105] duration metric: took 10.476ms to run NodePressure ...
	I0317 13:17:21.670591    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:17:20.737824    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:20.737824    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:20.745017    8032 main.go:141] libmachine: Using SSH client type: native
	I0317 13:17:20.745633    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.15 22 <nil> <nil>}
	I0317 13:17:20.745633    8032 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0317 13:17:20.879466    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0317 13:17:20.879529    8032 buildroot.go:70] root file system type: tmpfs
	I0317 13:17:20.879529    8032 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0317 13:17:20.879529    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:23.158531    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:23.158572    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:23.158572    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:22.434282    2276 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0317 13:17:22.441573    2276 kubeadm.go:739] kubelet initialised
	I0317 13:17:22.441720    2276 kubeadm.go:740] duration metric: took 6.4239ms waiting for restarted kubelet to initialise ...
	I0317 13:17:22.441778    2276 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:17:22.445549    2276 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:24.455661    2276 pod_ready.go:103] pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace has status "Ready":"False"
	I0317 13:17:25.782731    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:25.782731    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:25.788704    8032 main.go:141] libmachine: Using SSH client type: native
	I0317 13:17:25.789625    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.15 22 <nil> <nil>}
	I0317 13:17:25.789782    8032 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0317 13:17:25.945738    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0317 13:17:25.945738    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:28.192555    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:28.193208    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:28.193293    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:26.956163    2276 pod_ready.go:103] pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace has status "Ready":"False"
	I0317 13:17:29.454463    2276 pod_ready.go:103] pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace has status "Ready":"False"
	I0317 13:17:29.955675    2276 pod_ready.go:93] pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:29.955738    2276 pod_ready.go:82] duration metric: took 7.5100541s for pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:29.955738    2276 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:29.962561    2276 pod_ready.go:93] pod "etcd-pause-471400" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:29.962561    2276 pod_ready.go:82] duration metric: took 6.8227ms for pod "etcd-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:29.962561    2276 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:30.801332    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:30.801598    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:30.807053    8032 main.go:141] libmachine: Using SSH client type: native
	I0317 13:17:30.807528    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.15 22 <nil> <nil>}
	I0317 13:17:30.807528    8032 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0317 13:17:30.945771    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
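The two SSH commands above show minikube's idempotent unit-update idiom: write the candidate unit to `docker.service.new`, `diff` it against the installed unit, and only swap the file and restart the daemon when the content differs (here `diff` found no change, so the output is empty). A minimal local sketch of that pattern, with illustrative paths in `/tmp` instead of `/lib/systemd/system` and the real `systemctl` calls left commented out:

```shell
# Sketch of the write-diff-swap idiom from the log above.
# Paths are stand-ins; this does not touch the real docker.service.
set -eu

UNIT=/tmp/demo-docker.service   # stand-in for /lib/systemd/system/docker.service
NEW="$UNIT.new"

# Write the candidate unit (a trivial example body).
printf '%s\n' '[Service]' 'ExecStart=/usr/bin/true' > "$NEW"

# Replace the installed unit only when the content actually differs;
# on a real host this is where daemon-reload and restart would run.
if ! diff -u "$UNIT" "$NEW" >/dev/null 2>&1; then
    mv "$NEW" "$UNIT"
    # sudo systemctl daemon-reload && sudo systemctl restart docker
fi
```

The `diff || { mv; reload; restart; }` shape keeps repeated provisioning runs cheap: an unchanged unit never triggers a daemon restart.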
	I0317 13:17:30.945771    8032 machine.go:96] duration metric: took 45.9583445s to provisionDockerMachine
	I0317 13:17:30.945771    8032 start.go:293] postStartSetup for "kubernetes-upgrade-816300" (driver="hyperv")
	I0317 13:17:30.945771    8032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:17:30.957647    8032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:17:30.957647    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:33.148435    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:33.148613    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:33.148613    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:31.972597    2276 pod_ready.go:103] pod "kube-apiserver-pause-471400" in "kube-system" namespace has status "Ready":"False"
	I0317 13:17:33.471689    2276 pod_ready.go:93] pod "kube-apiserver-pause-471400" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:33.471689    2276 pod_ready.go:82] duration metric: took 3.5090886s for pod "kube-apiserver-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:33.471689    2276 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:34.481936    2276 pod_ready.go:93] pod "kube-controller-manager-pause-471400" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:34.481936    2276 pod_ready.go:82] duration metric: took 1.0102358s for pod "kube-controller-manager-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:34.481936    2276 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2w5n2" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:34.489466    2276 pod_ready.go:93] pod "kube-proxy-2w5n2" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:34.489466    2276 pod_ready.go:82] duration metric: took 7.5302ms for pod "kube-proxy-2w5n2" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:34.489466    2276 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:34.496080    2276 pod_ready.go:93] pod "kube-scheduler-pause-471400" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:34.496080    2276 pod_ready.go:82] duration metric: took 6.6138ms for pod "kube-scheduler-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:34.496080    2276 pod_ready.go:39] duration metric: took 12.0541671s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:17:34.496614    2276 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 13:17:34.517447    2276 ops.go:34] apiserver oom_adj: -16
	I0317 13:17:34.517518    2276 kubeadm.go:597] duration metric: took 22.791273s to restartPrimaryControlPlane
	I0317 13:17:34.517564    2276 kubeadm.go:394] duration metric: took 22.8634642s to StartCluster
	I0317 13:17:34.517564    2276 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:17:34.517708    2276 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 13:17:34.519537    2276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:17:34.521209    2276 start.go:235] Will wait 6m0s for node &{Name: IP:172.25.31.3 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 13:17:34.521209    2276 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 13:17:34.521209    2276 config.go:182] Loaded profile config "pause-471400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:17:34.526762    2276 out.go:177] * Verifying Kubernetes components...
	I0317 13:17:34.529267    2276 out.go:177] * Enabled addons: 
	I0317 13:17:34.535267    2276 addons.go:514] duration metric: took 14.0579ms for enable addons: enabled=[]
	I0317 13:17:34.543716    2276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:17:34.855703    2276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:17:34.884172    2276 node_ready.go:35] waiting up to 6m0s for node "pause-471400" to be "Ready" ...
	I0317 13:17:34.888852    2276 node_ready.go:49] node "pause-471400" has status "Ready":"True"
	I0317 13:17:34.888852    2276 node_ready.go:38] duration metric: took 4.68ms for node "pause-471400" to be "Ready" ...
	I0317 13:17:34.888852    2276 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:17:34.894067    2276 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:34.901365    2276 pod_ready.go:93] pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:34.901365    2276 pod_ready.go:82] duration metric: took 7.2982ms for pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:34.901365    2276 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:35.153071    2276 pod_ready.go:93] pod "etcd-pause-471400" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:35.153071    2276 pod_ready.go:82] duration metric: took 251.7026ms for pod "etcd-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:35.153071    2276 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:35.553894    2276 pod_ready.go:93] pod "kube-apiserver-pause-471400" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:35.553894    2276 pod_ready.go:82] duration metric: took 400.8191ms for pod "kube-apiserver-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:35.553894    2276 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:35.952366    2276 pod_ready.go:93] pod "kube-controller-manager-pause-471400" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:35.952366    2276 pod_ready.go:82] duration metric: took 398.4672ms for pod "kube-controller-manager-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:35.952447    2276 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2w5n2" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:36.357494    2276 pod_ready.go:93] pod "kube-proxy-2w5n2" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:36.357582    2276 pod_ready.go:82] duration metric: took 405.1305ms for pod "kube-proxy-2w5n2" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:36.357582    2276 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:36.752198    2276 pod_ready.go:93] pod "kube-scheduler-pause-471400" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:36.752327    2276 pod_ready.go:82] duration metric: took 394.7403ms for pod "kube-scheduler-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:36.752327    2276 pod_ready.go:39] duration metric: took 1.8634543s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:17:36.752327    2276 api_server.go:52] waiting for apiserver process to appear ...
	I0317 13:17:36.764599    2276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:17:36.800961    2276 api_server.go:72] duration metric: took 2.2797264s to wait for apiserver process to appear ...
	I0317 13:17:36.801100    2276 api_server.go:88] waiting for apiserver healthz status ...
	I0317 13:17:36.801100    2276 api_server.go:253] Checking apiserver healthz at https://172.25.31.3:8443/healthz ...
	I0317 13:17:36.808627    2276 api_server.go:279] https://172.25.31.3:8443/healthz returned 200:
	ok
	I0317 13:17:36.811190    2276 api_server.go:141] control plane version: v1.32.2
	I0317 13:17:36.811234    2276 api_server.go:131] duration metric: took 10.1339ms to wait for apiserver health ...
	I0317 13:17:36.811234    2276 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 13:17:36.952088    2276 system_pods.go:59] 6 kube-system pods found
	I0317 13:17:36.952088    2276 system_pods.go:61] "coredns-668d6bf9bc-2xpj4" [704a1878-5d2f-4871-98ac-ced7ddfbc684] Running
	I0317 13:17:36.952088    2276 system_pods.go:61] "etcd-pause-471400" [98e4a9fc-1ef0-4a40-a394-634314ddd363] Running
	I0317 13:17:36.952088    2276 system_pods.go:61] "kube-apiserver-pause-471400" [032360c0-bb5e-497f-ac77-134a17fab99f] Running
	I0317 13:17:36.952088    2276 system_pods.go:61] "kube-controller-manager-pause-471400" [b982ef81-1c85-4fd8-838b-2b8bbf1993d5] Running
	I0317 13:17:36.952088    2276 system_pods.go:61] "kube-proxy-2w5n2" [d2be3017-491d-427e-982e-7fcdf387b94a] Running
	I0317 13:17:36.952088    2276 system_pods.go:61] "kube-scheduler-pause-471400" [0a95a12f-a384-429a-93e0-8c27dbbe9c3f] Running
	I0317 13:17:36.952088    2276 system_pods.go:74] duration metric: took 140.8525ms to wait for pod list to return data ...
	I0317 13:17:36.952088    2276 default_sa.go:34] waiting for default service account to be created ...
	I0317 13:17:37.153579    2276 default_sa.go:45] found service account: "default"
	I0317 13:17:37.153743    2276 default_sa.go:55] duration metric: took 201.6532ms for default service account to be created ...
	I0317 13:17:37.153743    2276 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 13:17:37.354555    2276 system_pods.go:86] 6 kube-system pods found
	I0317 13:17:37.354555    2276 system_pods.go:89] "coredns-668d6bf9bc-2xpj4" [704a1878-5d2f-4871-98ac-ced7ddfbc684] Running
	I0317 13:17:37.354555    2276 system_pods.go:89] "etcd-pause-471400" [98e4a9fc-1ef0-4a40-a394-634314ddd363] Running
	I0317 13:17:37.354555    2276 system_pods.go:89] "kube-apiserver-pause-471400" [032360c0-bb5e-497f-ac77-134a17fab99f] Running
	I0317 13:17:37.354555    2276 system_pods.go:89] "kube-controller-manager-pause-471400" [b982ef81-1c85-4fd8-838b-2b8bbf1993d5] Running
	I0317 13:17:37.354555    2276 system_pods.go:89] "kube-proxy-2w5n2" [d2be3017-491d-427e-982e-7fcdf387b94a] Running
	I0317 13:17:37.354555    2276 system_pods.go:89] "kube-scheduler-pause-471400" [0a95a12f-a384-429a-93e0-8c27dbbe9c3f] Running
	I0317 13:17:37.354555    2276 system_pods.go:126] duration metric: took 200.8097ms to wait for k8s-apps to be running ...
	I0317 13:17:37.354555    2276 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 13:17:37.369871    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:17:37.397546    2276 system_svc.go:56] duration metric: took 42.9902ms WaitForService to wait for kubelet
	I0317 13:17:37.397692    2276 kubeadm.go:582] duration metric: took 2.8764504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 13:17:37.397692    2276 node_conditions.go:102] verifying NodePressure condition ...
	I0317 13:17:37.551807    2276 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 13:17:37.551807    2276 node_conditions.go:123] node cpu capacity is 2
	I0317 13:17:37.551807    2276 node_conditions.go:105] duration metric: took 154.1133ms to run NodePressure ...
	I0317 13:17:37.551807    2276 start.go:241] waiting for startup goroutines ...
	I0317 13:17:37.551807    2276 start.go:246] waiting for cluster config update ...
	I0317 13:17:37.551807    2276 start.go:255] writing updated cluster config ...
	I0317 13:17:37.567501    2276 ssh_runner.go:195] Run: rm -f paused
	I0317 13:17:37.733467    2276 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 13:17:37.745874    2276 out.go:177] * Done! kubectl is now configured to use "pause-471400" cluster and "default" namespace by default
	I0317 13:17:35.801350    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:35.801350    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:35.802069    8032 sshutil.go:53] new ssh client: &{IP:172.25.31.15 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-816300\id_rsa Username:docker}
	I0317 13:17:35.910924    8032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9531678s)
	I0317 13:17:35.922163    8032 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:17:35.928978    8032 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:17:35.928978    8032 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0317 13:17:35.929473    8032 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0317 13:17:35.930571    8032 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> 89402.pem in /etc/ssl/certs
	I0317 13:17:35.942242    8032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:17:35.970491    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /etc/ssl/certs/89402.pem (1708 bytes)
	I0317 13:17:36.018429    8032 start.go:296] duration metric: took 5.0726005s for postStartSetup
	I0317 13:17:36.018553    8032 fix.go:56] duration metric: took 53.2698313s for fixHost
	I0317 13:17:36.018614    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:38.392182    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:38.392182    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:38.393076    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:41.217896    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:41.217896    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:41.224130    8032 main.go:141] libmachine: Using SSH client type: native
	I0317 13:17:41.224855    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.15 22 <nil> <nil>}
	I0317 13:17:41.224923    8032 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:17:41.362553    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742217461.386042620
	
	I0317 13:17:41.362553    8032 fix.go:216] guest clock: 1742217461.386042620
	I0317 13:17:41.362553    8032 fix.go:229] Guest: 2025-03-17 13:17:41.38604262 +0000 UTC Remote: 2025-03-17 13:17:36.0185533 +0000 UTC m=+316.796178401 (delta=5.36748932s)
	I0317 13:17:41.362553    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:43.686366    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:43.686366    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:43.686716    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:46.565761    7220 start.go:364] duration metric: took 2m26.9697056s to acquireMachinesLock for "docker-flags-664100"
	I0317 13:17:46.566068    7220 start.go:93] Provisioning new machine with config: &{Name:docker-flags-664100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.32.2 ClusterName:docker-flags-664100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 13:17:46.566290    7220 start.go:125] createHost starting for "" (driver="hyperv")
	I0317 13:17:46.570469    7220 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0317 13:17:46.571031    7220 start.go:159] libmachine.API.Create for "docker-flags-664100" (driver="hyperv")
	I0317 13:17:46.571136    7220 client.go:168] LocalClient.Create starting
	I0317 13:17:46.572389    7220 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0317 13:17:46.572450    7220 main.go:141] libmachine: Decoding PEM data...
	I0317 13:17:46.572450    7220 main.go:141] libmachine: Parsing certificate...
	I0317 13:17:46.572450    7220 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0317 13:17:46.573116    7220 main.go:141] libmachine: Decoding PEM data...
	I0317 13:17:46.573116    7220 main.go:141] libmachine: Parsing certificate...
	I0317 13:17:46.573116    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0317 13:17:48.746053    7220 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0317 13:17:48.746511    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:48.746599    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0317 13:17:46.407812    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:46.407812    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:46.413158    8032 main.go:141] libmachine: Using SSH client type: native
	I0317 13:17:46.413798    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.15 22 <nil> <nil>}
	I0317 13:17:46.413798    8032 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1742217461
	I0317 13:17:46.565085    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 17 13:17:41 UTC 2025
	
	I0317 13:17:46.565182    8032 fix.go:236] clock set: Mon Mar 17 13:17:41 UTC 2025
	 (err=<nil>)
	I0317 13:17:46.565182    8032 start.go:83] releasing machines lock for "kubernetes-upgrade-816300", held for 1m3.8166881s
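The fix.go lines above implement a guest clock sync: read `date +%s.%N` inside the VM, compare it with the host-side timestamp to get the drift (`delta=5.36748932s` here), and reset the guest with `sudo date -s @<epoch>` when it has drifted. A minimal sketch of the drift computation, using the guest epoch from the log and a hypothetical host-side reading:

```shell
# Sketch of the guest-clock drift check from the log above.
# guest_epoch is the value `date +%s` returned in the VM (from the log);
# host_epoch is an illustrative host-side reading, not from the log.
guest_epoch=1742217461
host_epoch=1742217456
delta=$((guest_epoch - host_epoch))
echo "drift: ${delta}s"
# On real drift, minikube resets the guest clock with:
#   sudo date -s @${guest_epoch}
```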
	I0317 13:17:46.565539    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:49.002566    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:49.002566    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:49.002648    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:50.720223    7220 main.go:141] libmachine: [stdout =====>] : False
	
	I0317 13:17:50.720626    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:50.720734    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 13:17:52.423871    7220 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 13:17:52.424086    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:52.424156    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 13:17:51.880887    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:51.880887    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:51.885854    8032 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0317 13:17:51.885854    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:51.898859    8032 ssh_runner.go:195] Run: cat /version.json
	I0317 13:17:51.899868    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:54.421191    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:54.421583    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:54.421728    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:56.909392    7220 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 13:17:56.909529    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:56.913267    7220 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0317 13:17:57.442322    7220 main.go:141] libmachine: Creating SSH key...
	I0317 13:17:57.710334    7220 main.go:141] libmachine: Creating VM...
	I0317 13:17:57.710334    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 13:17:54.438861    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:54.438861    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:54.438861    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:57.332710    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:57.332710    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:57.333836    8032 sshutil.go:53] new ssh client: &{IP:172.25.31.15 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-816300\id_rsa Username:docker}
	I0317 13:17:57.369512    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:57.369623    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:57.369715    8032 sshutil.go:53] new ssh client: &{IP:172.25.31.15 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-816300\id_rsa Username:docker}
	I0317 13:17:57.447051    8032 ssh_runner.go:235] Completed: cat /version.json: (5.5481296s)
	I0317 13:17:57.460709    8032 ssh_runner.go:195] Run: systemctl --version
	I0317 13:17:57.466105    8032 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.5801146s)
	W0317 13:17:57.466172    8032 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
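Editorial aside: the `Process exited with status 127` above is the shell's "command not found" exit code. The registry connectivity probe ran the Windows binary name `curl.exe` over SSH inside the Linux guest, which ships only `curl`, so the probe can never succeed regardless of actual network reachability. A minimal sketch of the exit-code behaviour (run on a host without a `curl.exe` in PATH):

```shell
# A command name absent from PATH makes the shell exit with status 127,
# exactly as reported by ssh_runner above.
sh -c 'curl.exe --version' >/dev/null 2>&1
echo "exit status: $?"
```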
	I0317 13:17:57.490471    8032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:17:57.500781    8032 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:17:57.513044    8032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0317 13:17:57.546333    8032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	W0317 13:17:57.579874    8032 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0317 13:17:57.579874    8032 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0317 13:17:57.585318    8032 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 13:17:57.585318    8032 start.go:495] detecting cgroup driver to use...
	I0317 13:17:57.585318    8032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:17:57.640370    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 13:17:57.675278    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 13:17:57.695285    8032 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 13:17:57.705286    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 13:17:57.742030    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 13:17:57.777181    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 13:17:57.820814    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 13:17:57.857280    8032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:17:57.901296    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 13:17:57.942785    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 13:17:57.982121    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 13:17:58.017874    8032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:17:58.054040    8032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:17:58.095195    8032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:17:58.413025    8032 ssh_runner.go:195] Run: sudo systemctl restart containerd
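Editorial aside: the run of `sed` commands above rewrites /etc/containerd/config.toml in place (cgroupfs cgroup driver, runc v2 runtime, CNI conf dir) before the daemon-reload and restart. A self-contained sketch of the cgroup-driver edit, using the same `sed` expression as the log but applied to an illustrative fragment under /tmp rather than the real file:

```shell
# Illustrative config fragment; the real file is /etc/containerd/config.toml.
cat > /tmp/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same expression the log runs: force SystemdCgroup = false, keeping indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.toml
grep SystemdCgroup /tmp/config.toml
# ->   SystemdCgroup = false
```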
	I0317 13:17:58.451641    8032 start.go:495] detecting cgroup driver to use...
	I0317 13:17:58.465652    8032 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0317 13:17:58.529076    8032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:17:58.570640    8032 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:17:58.626538    8032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:17:58.662958    8032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 13:17:58.690221    8032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
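Editorial aside: crictl is pointed first at containerd and then, once the runtime choice settles on Docker, at cri-dockerd. The second `printf | tee` above leaves /etc/crictl.yaml as a one-line file; the sketch below reproduces that content, writing to /tmp instead of /etc:

```shell
# Reproduce the final crictl.yaml contents from the log (path changed to /tmp).
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' > /tmp/crictl.yaml
cat /tmp/crictl.yaml
# -> runtime-endpoint: unix:///var/run/cri-dockerd.sock
```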
	I0317 13:17:58.738088    8032 ssh_runner.go:195] Run: which cri-dockerd
	I0317 13:17:58.758481    8032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0317 13:17:58.779603    8032 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0317 13:17:58.833624    8032 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0317 13:17:59.146106    8032 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0317 13:18:01.166377    7220 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 13:18:01.166435    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:01.166708    7220 main.go:141] libmachine: Using switch "Default Switch"
	I0317 13:18:01.166827    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 13:18:02.999476    7220 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 13:18:02.999476    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:03.000256    7220 main.go:141] libmachine: Creating VHD
	I0317 13:18:03.000256    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-664100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0317 13:17:59.431162    8032 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0317 13:17:59.431455    8032 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0317 13:17:59.480093    8032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:17:59.800249    8032 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 13:18:07.000287    7220 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-664100\fixed.
	                          vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 97CF4DDA-928C-42E4-BCB3-D3451FC3FCD8
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0317 13:18:07.001272    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:07.001272    7220 main.go:141] libmachine: Writing magic tar header
	I0317 13:18:07.001415    7220 main.go:141] libmachine: Writing SSH key tar header
	I0317 13:18:07.014756    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-664100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-664100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0317 13:18:10.351982    7220 main.go:141] libmachine: [stdout =====>] : 
	I0317 13:18:10.352821    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:10.352915    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-664100\disk.vhd' -SizeBytes 20000MB
	I0317 13:18:13.066971    7220 main.go:141] libmachine: [stdout =====>] : 
	I0317 13:18:13.067124    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:13.067206    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM docker-flags-664100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-664100' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0317 13:18:12.877539    8032 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.0770378s)
	I0317 13:18:12.896017    8032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0317 13:18:12.955589    8032 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0317 13:18:13.012159    8032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 13:18:13.072903    8032 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0317 13:18:13.340896    8032 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0317 13:18:13.573608    8032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:18:13.829786    8032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0317 13:18:13.875203    8032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 13:18:13.919615    8032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:18:14.187751    8032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0317 13:18:14.329293    8032 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0317 13:18:14.342413    8032 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0317 13:18:14.354711    8032 start.go:563] Will wait 60s for crictl version
	I0317 13:18:14.366874    8032 ssh_runner.go:195] Run: which crictl
	I0317 13:18:14.383360    8032 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:18:14.442622    8032 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0317 13:18:14.452246    8032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 13:18:14.501000    8032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 13:18:17.267362    7220 main.go:141] libmachine: [stdout =====>] : 
	Name                State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                ----- ----------- ----------------- ------   ------             -------
	docker-flags-664100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0317 13:18:17.267362    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:17.267362    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName docker-flags-664100 -DynamicMemoryEnabled $false
	I0317 13:18:14.556142    8032 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0317 13:18:14.557103    8032 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0317 13:18:14.561113    8032 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0317 13:18:14.561113    8032 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0317 13:18:14.561113    8032 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0317 13:18:14.561113    8032 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4b:84:d5 Flags:up|broadcast|multicast|running}
	I0317 13:18:14.565200    8032 ip.go:214] interface addr: fe80::f0c7:c31c:6237:ef35/64
	I0317 13:18:14.565340    8032 ip.go:214] interface addr: 172.25.16.1/20
	I0317 13:18:14.577182    8032 ssh_runner.go:195] Run: grep 172.25.16.1	host.minikube.internal$ /etc/hosts
	I0317 13:18:14.585243    8032 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-816300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-816300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.31.15 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:18:14.585517    8032 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 13:18:14.594567    8032 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 13:18:14.627206    8032 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0317 13:18:14.627276    8032 docker.go:619] Images already preloaded, skipping extraction
	I0317 13:18:14.636447    8032 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 13:18:14.682993    8032 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0317 13:18:14.683057    8032 cache_images.go:84] Images are preloaded, skipping loading
	I0317 13:18:14.683115    8032 kubeadm.go:934] updating node { 172.25.31.15 8443 v1.32.2 docker true true} ...
	I0317 13:18:14.683480    8032 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-816300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.31.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-816300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 13:18:14.695220    8032 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0317 13:18:14.771989    8032 cni.go:84] Creating CNI manager for ""
	I0317 13:18:14.772063    8032 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 13:18:14.772150    8032 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:18:14.772150    8032 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.31.15 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-816300 NodeName:kubernetes-upgrade-816300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.31.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.31.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 13:18:14.772565    8032 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.31.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-816300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.31.15"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.31.15"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 13:18:14.784343    8032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 13:18:14.803518    8032 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:18:14.816380    8032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:18:14.836728    8032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0317 13:18:14.869634    8032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:18:14.906219    8032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2303 bytes)
	I0317 13:18:14.954427    8032 ssh_runner.go:195] Run: grep 172.25.31.15	control-plane.minikube.internal$ /etc/hosts
	I0317 13:18:14.977053    8032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:18:15.257624    8032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:18:15.303878    8032 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-816300 for IP: 172.25.31.15
	I0317 13:18:15.303985    8032 certs.go:194] generating shared ca certs ...
	I0317 13:18:15.303985    8032 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:18:15.304844    8032 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0317 13:18:15.305276    8032 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0317 13:18:15.305532    8032 certs.go:256] generating profile certs ...
	I0317 13:18:15.305974    8032 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-816300\client.key
	I0317 13:18:15.307469    8032 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-816300\apiserver.key.e431048a
	I0317 13:18:15.308801    8032 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-816300\proxy-client.key
	I0317 13:18:15.311819    8032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem (1338 bytes)
	W0317 13:18:15.311819    8032 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940_empty.pem, impossibly tiny 0 bytes
	I0317 13:18:15.311819    8032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0317 13:18:15.312882    8032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0317 13:18:15.313217    8032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0317 13:18:15.313217    8032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0317 13:18:15.314072    8032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem (1708 bytes)
	I0317 13:18:15.316827    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:18:15.400199    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:18:15.462425    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:18:15.526167    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0317 13:18:15.597174    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-816300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0317 13:18:15.661787    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-816300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 13:18:15.735925    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-816300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:18:15.798431    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-816300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0317 13:18:15.855651    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /usr/share/ca-certificates/89402.pem (1708 bytes)
	I0317 13:18:15.924382    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:18:15.994691    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem --> /usr/share/ca-certificates/8940.pem (1338 bytes)
	I0317 13:18:16.114460    8032 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:18:16.210251    8032 ssh_runner.go:195] Run: openssl version
	I0317 13:18:16.233257    8032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/89402.pem && ln -fs /usr/share/ca-certificates/89402.pem /etc/ssl/certs/89402.pem"
	I0317 13:18:16.328252    8032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89402.pem
	I0317 13:18:16.348691    8032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:46 /usr/share/ca-certificates/89402.pem
	I0317 13:18:16.361948    8032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89402.pem
	I0317 13:18:16.382921    8032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/89402.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:18:16.425587    8032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:18:16.496799    8032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:18:16.509613    8032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:18:16.530149    8032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:18:16.553931    8032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 13:18:16.597941    8032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8940.pem && ln -fs /usr/share/ca-certificates/8940.pem /etc/ssl/certs/8940.pem"
	I0317 13:18:16.647365    8032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8940.pem
	I0317 13:18:16.655751    8032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:46 /usr/share/ca-certificates/8940.pem
	I0317 13:18:16.671372    8032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8940.pem
	I0317 13:18:16.699097    8032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8940.pem /etc/ssl/certs/51391683.0"
	I0317 13:18:16.736011    8032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:18:16.757205    8032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0317 13:18:16.783467    8032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0317 13:18:16.808086    8032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0317 13:18:16.833166    8032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0317 13:18:16.870359    8032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0317 13:18:16.922298    8032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0317 13:18:16.938398    8032 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-816300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-816300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.31.15 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:18:16.950342    8032 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0317 13:18:17.004467    8032 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:18:17.077298    8032 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0317 13:18:17.077360    8032 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0317 13:18:17.090135    8032 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0317 13:18:17.140276    8032 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0317 13:18:17.142058    8032 kubeconfig.go:125] found "kubernetes-upgrade-816300" server: "https://172.25.31.15:8443"
	I0317 13:18:17.144483    8032 kapi.go:59] client config for kubernetes-upgrade-816300: &rest.Config{Host:"https://172.25.31.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-816300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-816300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2e292e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0317 13:18:17.147448    8032 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0317 13:18:17.147448    8032 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0317 13:18:17.147448    8032 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0317 13:18:17.147539    8032 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0317 13:18:17.159042    8032 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0317 13:18:17.190122    8032 kubeadm.go:630] The running cluster does not require reconfiguration: 172.25.31.15
	I0317 13:18:17.190183    8032 kubeadm.go:1160] stopping kube-system containers ...
	I0317 13:18:17.201190    8032 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0317 13:18:17.351159    8032 docker.go:483] Stopping containers: [db863eea0649 eb0e141e3644 c2a5fddd2f2a bd5d9054ca0b 3508290bdfea 01c8f83a4fd4 6af969732bf3 3dbb8ea1a155 1fb48ef6e80d 713e8fb828e6 9ddab5c27dbb 68768033f15d d7acb494b3ff 85d7e55ce335 702e9352f569 5327df366234 eb495adde413 351f7d8503a5 92bf9e018585 cedc91461303 a1006dd94a28 17cd71ec5f1e c3733c574b1a 43527f4438e5 9944f7f82ecf d62da149d7c0 3e461d162750 b1497d98354d c0d14d1532b2]
	I0317 13:18:17.362464    8032 ssh_runner.go:195] Run: docker stop db863eea0649 eb0e141e3644 c2a5fddd2f2a bd5d9054ca0b 3508290bdfea 01c8f83a4fd4 6af969732bf3 3dbb8ea1a155 1fb48ef6e80d 713e8fb828e6 9ddab5c27dbb 68768033f15d d7acb494b3ff 85d7e55ce335 702e9352f569 5327df366234 eb495adde413 351f7d8503a5 92bf9e018585 cedc91461303 a1006dd94a28 17cd71ec5f1e c3733c574b1a 43527f4438e5 9944f7f82ecf d62da149d7c0 3e461d162750 b1497d98354d c0d14d1532b2
	
	
	==> Docker <==
	Mar 17 13:17:16 pause-471400 dockerd[4790]: time="2025-03-17T13:17:16.161709711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:16 pause-471400 dockerd[4790]: time="2025-03-17T13:17:16.162187517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:16 pause-471400 dockerd[4790]: time="2025-03-17T13:17:16.204048953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 13:17:16 pause-471400 dockerd[4790]: time="2025-03-17T13:17:16.204206355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 13:17:16 pause-471400 dockerd[4790]: time="2025-03-17T13:17:16.204220456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:16 pause-471400 dockerd[4790]: time="2025-03-17T13:17:16.204318457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:19 pause-471400 cri-dockerd[5064]: time="2025-03-17T13:17:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.245299358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.245757263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.245876365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.246762575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.249588607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.249664208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.249680108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.249960812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:21 pause-471400 cri-dockerd[5064]: time="2025-03-17T13:17:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e24eb091f5348b0ae125306f5a32c689a643271dc3d4455fa127281466cc5bc0/resolv.conf as [nameserver 172.25.16.1]"
	Mar 17 13:17:21 pause-471400 cri-dockerd[5064]: time="2025-03-17T13:17:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f960f21183ccec1babada0111abfee90aa2dcdcdb68df584cc369e1e1372f515/resolv.conf as [nameserver 172.25.16.1]"
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.632997819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.633108620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.633169821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.633345022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.953291775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.953667879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.953767180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.954143984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	30a118f025650       c69fa2e9cbf5f       About a minute ago   Running             coredns                   1                   f960f21183cce       coredns-668d6bf9bc-2xpj4
	ed4022dcab640       f1332858868e1       About a minute ago   Running             kube-proxy                1                   e24eb091f5348       kube-proxy-2w5n2
	384fe5c06f2df       85b7a174738ba       About a minute ago   Running             kube-apiserver            1                   fd2fa1dccb7a9       kube-apiserver-pause-471400
	e881d41583c35       a9e7e6b294baf       About a minute ago   Running             etcd                      1                   0d9f0b8d8c9e1       etcd-pause-471400
	9b55e85d3d127       b6a454c5a800d       About a minute ago   Running             kube-controller-manager   1                   f17938a9116db       kube-controller-manager-pause-471400
	c57a64a068ec0       d8e673e7c9983       About a minute ago   Running             kube-scheduler            1                   b42e93259bac8       kube-scheduler-pause-471400
	d4f557bada235       c69fa2e9cbf5f       7 minutes ago        Exited              coredns                   0                   c0ee58f77451c       coredns-668d6bf9bc-2xpj4
	bec7db06f9e97       f1332858868e1       7 minutes ago        Exited              kube-proxy                0                   0b09a5de1f0b4       kube-proxy-2w5n2
	cecd0f7a3b605       a9e7e6b294baf       8 minutes ago        Exited              etcd                      0                   587f0dda7141f       etcd-pause-471400
	48490bf5143cc       d8e673e7c9983       8 minutes ago        Exited              kube-scheduler            0                   c59c53abe2ddc       kube-scheduler-pause-471400
	a7133843d6ed6       b6a454c5a800d       8 minutes ago        Exited              kube-controller-manager   0                   a77930f3d721b       kube-controller-manager-pause-471400
	5d96f9d335dfb       85b7a174738ba       8 minutes ago        Exited              kube-apiserver            0                   4a105f3090f3f       kube-apiserver-pause-471400
	
	
	==> coredns [30a118f02565] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1d9c7cb05d14a915f04974bf55cf5686cd43414eb293ac9a790a39f065db1c589d13dfd7b12923475c8499a18e0bdc26041d87eeb9e9602ff2cbbc57da44e2c0
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45307 - 28259 "HINFO IN 85235851623009837.2427239534048236081. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.552204713s
	
	
	==> coredns [d4f557bada23] <==
	[INFO] plugin/reload: Running configuration SHA512 = 1d9c7cb05d14a915f04974bf55cf5686cd43414eb293ac9a790a39f065db1c589d13dfd7b12923475c8499a18e0bdc26041d87eeb9e9602ff2cbbc57da44e2c0
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46201 - 39183 "HINFO IN 6984254641872043389.8388004187190449982. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029247159s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1613281540]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Mar-2025 13:10:33.338) (total time: 30005ms):
	Trace[1613281540]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (13:11:03.342)
	Trace[1613281540]: [30.005282978s] [30.005282978s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[11254756]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Mar-2025 13:10:33.337) (total time: 30006ms):
	Trace[11254756]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (13:11:03.342)
	Trace[11254756]: [30.006267481s] [30.006267481s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[805144725]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Mar-2025 13:10:33.342) (total time: 30002ms):
	Trace[805144725]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (13:11:03.345)
	Trace[805144725]: [30.002919579s] [30.002919579s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	
	
	==> dmesg <==
	[  +7.995793] systemd-fstab-generator[1857]: Ignoring "noauto" option for root device
	[  +0.124170] kauditd_printk_skb: 74 callbacks suppressed
	[  +8.557395] systemd-fstab-generator[2288]: Ignoring "noauto" option for root device
	[  +0.146230] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.717688] systemd-fstab-generator[2406]: Ignoring "noauto" option for root device
	[  +0.192992] kauditd_printk_skb: 12 callbacks suppressed
	[Mar17 13:11] kauditd_printk_skb: 67 callbacks suppressed
	[Mar17 13:16] systemd-fstab-generator[4346]: Ignoring "noauto" option for root device
	[  +0.705807] systemd-fstab-generator[4383]: Ignoring "noauto" option for root device
	[  +0.295876] systemd-fstab-generator[4395]: Ignoring "noauto" option for root device
	[  +0.305026] systemd-fstab-generator[4423]: Ignoring "noauto" option for root device
	[Mar17 13:17] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.248518] systemd-fstab-generator[5013]: Ignoring "noauto" option for root device
	[  +0.223294] systemd-fstab-generator[5025]: Ignoring "noauto" option for root device
	[  +0.218731] systemd-fstab-generator[5037]: Ignoring "noauto" option for root device
	[  +0.311982] systemd-fstab-generator[5052]: Ignoring "noauto" option for root device
	[  +1.000949] systemd-fstab-generator[5222]: Ignoring "noauto" option for root device
	[  +0.127940] kauditd_printk_skb: 119 callbacks suppressed
	[  +3.709042] systemd-fstab-generator[5341]: Ignoring "noauto" option for root device
	[  +1.441375] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.258211] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.993660] kauditd_printk_skb: 25 callbacks suppressed
	[  +4.824841] systemd-fstab-generator[6220]: Ignoring "noauto" option for root device
	[ +11.225498] systemd-fstab-generator[6299]: Ignoring "noauto" option for root device
	[  +0.157997] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [cecd0f7a3b60] <==
	{"level":"warn","ts":"2025-03-17T13:11:50.338989Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.176536ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2965503412227013932 > lease_revoke:<id:292795a439fffcf0>","response":"size:27"}
	{"level":"info","ts":"2025-03-17T13:11:50.340172Z","caller":"traceutil/trace.go:171","msg":"trace[737798938] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:397; }","duration":"123.303543ms","start":"2025-03-17T13:11:50.216843Z","end":"2025-03-17T13:11:50.340146Z","steps":["trace[737798938] 'range keys from in-memory index tree'  (duration: 122.252738ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T13:14:58.143162Z","caller":"traceutil/trace.go:171","msg":"trace[1053069170] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"146.517709ms","start":"2025-03-17T13:14:57.996622Z","end":"2025-03-17T13:14:58.143140Z","steps":["trace[1053069170] 'process raft request'  (duration: 146.386207ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T13:14:58.441752Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.383405ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T13:14:58.441939Z","caller":"traceutil/trace.go:171","msg":"trace[2100850140] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:453; }","duration":"225.593909ms","start":"2025-03-17T13:14:58.216329Z","end":"2025-03-17T13:14:58.441923Z","steps":["trace[2100850140] 'range keys from in-memory index tree'  (duration: 225.362605ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T13:15:02.295694Z","caller":"traceutil/trace.go:171","msg":"trace[193129194] transaction","detail":"{read_only:false; response_revision:454; number_of_response:1; }","duration":"190.297679ms","start":"2025-03-17T13:15:02.105379Z","end":"2025-03-17T13:15:02.295677Z","steps":["trace[193129194] 'process raft request'  (duration: 189.894872ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T13:15:04.977280Z","caller":"traceutil/trace.go:171","msg":"trace[1359934569] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"135.604477ms","start":"2025-03-17T13:15:04.841646Z","end":"2025-03-17T13:15:04.977251Z","steps":["trace[1359934569] 'process raft request'  (duration: 111.478489ms)","trace[1359934569] 'compare'  (duration: 24.016686ms)"],"step_count":2}
	{"level":"info","ts":"2025-03-17T13:15:10.999162Z","caller":"traceutil/trace.go:171","msg":"trace[258622077] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"119.593274ms","start":"2025-03-17T13:15:10.879547Z","end":"2025-03-17T13:15:10.999141Z","steps":["trace[258622077] 'process raft request'  (duration: 119.262269ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T13:15:11.391304Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.088434ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T13:15:11.391384Z","caller":"traceutil/trace.go:171","msg":"trace[828642724] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:457; }","duration":"175.189035ms","start":"2025-03-17T13:15:11.216180Z","end":"2025-03-17T13:15:11.391370Z","steps":["trace[828642724] 'range keys from in-memory index tree'  (duration: 175.078034ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T13:15:11.392058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.990995ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T13:15:11.392173Z","caller":"traceutil/trace.go:171","msg":"trace[2034534485] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:457; }","duration":"115.137998ms","start":"2025-03-17T13:15:11.277024Z","end":"2025-03-17T13:15:11.392162Z","steps":["trace[2034534485] 'range keys from in-memory index tree'  (duration: 114.932794ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T13:16:00.101294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.268497ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2965503412227014895 > lease_revoke:<id:292795a43a0000b1>","response":"size:27"}
	{"level":"warn","ts":"2025-03-17T13:16:05.353991Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.345465ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T13:16:05.354111Z","caller":"traceutil/trace.go:171","msg":"trace[472980301] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:475; }","duration":"136.488467ms","start":"2025-03-17T13:16:05.217606Z","end":"2025-03-17T13:16:05.354095Z","steps":["trace[472980301] 'range keys from in-memory index tree'  (duration: 136.329365ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T13:16:55.404983Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-03-17T13:16:55.405138Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"pause-471400","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.25.31.3:2380"],"advertise-client-urls":["https://172.25.31.3:2379"]}
	{"level":"warn","ts":"2025-03-17T13:16:55.405236Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-03-17T13:16:55.405337Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-03-17T13:16:55.495271Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 172.25.31.3:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-03-17T13:16:55.495354Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 172.25.31.3:2379: use of closed network connection"}
	{"level":"info","ts":"2025-03-17T13:16:55.495409Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"5591e26a9a7b2927","current-leader-member-id":"5591e26a9a7b2927"}
	{"level":"info","ts":"2025-03-17T13:16:55.510191Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"172.25.31.3:2380"}
	{"level":"info","ts":"2025-03-17T13:16:55.510521Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"172.25.31.3:2380"}
	{"level":"info","ts":"2025-03-17T13:16:55.510537Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"pause-471400","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.25.31.3:2380"],"advertise-client-urls":["https://172.25.31.3:2379"]}
	
	
	==> etcd [e881d41583c3] <==
	{"level":"info","ts":"2025-03-17T13:17:16.547055Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"594dfe8495e74bd1","local-member-id":"5591e26a9a7b2927","added-peer-id":"5591e26a9a7b2927","added-peer-peer-urls":["https://172.25.31.3:2380"]}
	{"level":"info","ts":"2025-03-17T13:17:16.547363Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"594dfe8495e74bd1","local-member-id":"5591e26a9a7b2927","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T13:17:16.548207Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T13:17:16.584738Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T13:17:16.592848Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.25.31.3:2380"}
	{"level":"info","ts":"2025-03-17T13:17:16.595655Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.25.31.3:2380"}
	{"level":"info","ts":"2025-03-17T13:17:16.591972Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-03-17T13:17:16.598686Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"5591e26a9a7b2927","initial-advertise-peer-urls":["https://172.25.31.3:2380"],"listen-peer-urls":["https://172.25.31.3:2380"],"advertise-client-urls":["https://172.25.31.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.31.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-03-17T13:17:16.599700Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-03-17T13:17:17.585141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5591e26a9a7b2927 is starting a new election at term 2"}
	{"level":"info","ts":"2025-03-17T13:17:17.585728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5591e26a9a7b2927 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-03-17T13:17:17.585987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5591e26a9a7b2927 received MsgPreVoteResp from 5591e26a9a7b2927 at term 2"}
	{"level":"info","ts":"2025-03-17T13:17:17.586296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5591e26a9a7b2927 became candidate at term 3"}
	{"level":"info","ts":"2025-03-17T13:17:17.586433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5591e26a9a7b2927 received MsgVoteResp from 5591e26a9a7b2927 at term 3"}
	{"level":"info","ts":"2025-03-17T13:17:17.586643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5591e26a9a7b2927 became leader at term 3"}
	{"level":"info","ts":"2025-03-17T13:17:17.586795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5591e26a9a7b2927 elected leader 5591e26a9a7b2927 at term 3"}
	{"level":"info","ts":"2025-03-17T13:17:17.602506Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"5591e26a9a7b2927","local-member-attributes":"{Name:pause-471400 ClientURLs:[https://172.25.31.3:2379]}","request-path":"/0/members/5591e26a9a7b2927/attributes","cluster-id":"594dfe8495e74bd1","publish-timeout":"7s"}
	{"level":"info","ts":"2025-03-17T13:17:17.603405Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T13:17:17.604430Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T13:17:17.611742Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.31.3:2379"}
	{"level":"info","ts":"2025-03-17T13:17:17.612246Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T13:17:17.612830Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T13:17:17.623815Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-03-17T13:17:17.628168Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-03-17T13:17:17.628400Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:18:38 up 10 min,  0 users,  load average: 0.82, 0.70, 0.33
	Linux pause-471400 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [384fe5c06f2d] <==
	I0317 13:17:19.853939       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0317 13:17:19.877532       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0317 13:17:19.877858       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0317 13:17:19.878317       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0317 13:17:19.878917       1 aggregator.go:171] initial CRD sync complete...
	I0317 13:17:19.879177       1 autoregister_controller.go:144] Starting autoregister controller
	I0317 13:17:19.879463       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0317 13:17:19.879676       1 cache.go:39] Caches are synced for autoregister controller
	I0317 13:17:19.899791       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0317 13:17:19.915654       1 shared_informer.go:320] Caches are synced for configmaps
	I0317 13:17:19.915993       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0317 13:17:19.916354       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0317 13:17:19.920202       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0317 13:17:19.923408       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0317 13:17:19.940168       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0317 13:17:20.615620       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0317 13:17:20.735965       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0317 13:17:21.230299       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.31.3]
	I0317 13:17:21.231913       1 controller.go:615] quota admission added evaluator for: endpoints
	I0317 13:17:21.247969       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0317 13:17:22.232797       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0317 13:17:22.336650       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0317 13:17:22.412450       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0317 13:17:22.448236       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0317 13:17:23.340983       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [5d96f9d335df] <==
	W0317 13:17:04.647526       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.672443       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.687878       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.710901       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.740911       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.781912       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.817857       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.819414       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.832338       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.907780       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.919812       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.930836       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.952629       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.961532       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.999510       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.001029       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.019544       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.048122       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.069779       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.069814       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.091638       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.112756       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.197283       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.246887       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.292253       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [9b55e85d3d12] <==
	I0317 13:17:23.080296       1 shared_informer.go:320] Caches are synced for deployment
	I0317 13:17:23.081584       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0317 13:17:23.084547       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0317 13:17:23.085804       1 shared_informer.go:320] Caches are synced for expand
	I0317 13:17:23.090281       1 shared_informer.go:320] Caches are synced for node
	I0317 13:17:23.090411       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0317 13:17:23.090494       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0317 13:17:23.090502       1 shared_informer.go:320] Caches are synced for HPA
	I0317 13:17:23.090830       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0317 13:17:23.091129       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0317 13:17:23.091364       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-471400"
	I0317 13:17:23.091684       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0317 13:17:23.095407       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0317 13:17:23.097174       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0317 13:17:23.099596       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0317 13:17:23.101921       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0317 13:17:23.117369       1 shared_informer.go:320] Caches are synced for garbage collector
	I0317 13:17:23.117687       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0317 13:17:23.117790       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0317 13:17:23.117393       1 shared_informer.go:320] Caches are synced for garbage collector
	I0317 13:17:23.358357       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="288.537752ms"
	I0317 13:17:23.406361       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="47.830211ms"
	I0317 13:17:23.406654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="76.9µs"
	I0317 13:17:29.857142       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="38.129714ms"
	I0317 13:17:29.858277       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.078095ms"
	
	
	==> kube-controller-manager [a7133843d6ed] <==
	I0317 13:10:29.922135       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-471400"
	I0317 13:10:29.922435       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-471400"
	I0317 13:10:29.924024       1 shared_informer.go:320] Caches are synced for persistent volume
	I0317 13:10:29.926361       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0317 13:10:29.943211       1 shared_informer.go:320] Caches are synced for ephemeral
	I0317 13:10:29.943268       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0317 13:10:29.944165       1 shared_informer.go:320] Caches are synced for resource quota
	I0317 13:10:29.980845       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-471400"
	I0317 13:10:30.162139       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-471400"
	I0317 13:10:31.098735       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="362.465218ms"
	I0317 13:10:31.145521       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.665715ms"
	I0317 13:10:31.146629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="73.902µs"
	I0317 13:10:31.192066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="158.805µs"
	I0317 13:10:31.268032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="102.703µs"
	I0317 13:10:31.736967       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="54.465068ms"
	I0317 13:10:31.751111       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.608742ms"
	I0317 13:10:31.751271       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="36.001µs"
	I0317 13:10:33.056446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="63.9µs"
	I0317 13:10:33.089915       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="91.4µs"
	I0317 13:10:33.108387       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="63µs"
	I0317 13:10:33.113501       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="113.8µs"
	I0317 13:10:35.939212       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-471400"
	I0317 13:11:10.093929       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="18.323371ms"
	I0317 13:11:10.094245       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="135.8µs"
	I0317 13:15:11.002912       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-471400"
	
	
	==> kube-proxy [bec7db06f9e9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0317 13:10:33.466097       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0317 13:10:33.507364       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.31.3"]
	E0317 13:10:33.508007       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 13:10:33.571022       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0317 13:10:33.571126       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0317 13:10:33.571156       1 server_linux.go:170] "Using iptables Proxier"
	I0317 13:10:33.575685       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 13:10:33.579156       1 server.go:497] "Version info" version="v1.32.2"
	I0317 13:10:33.579485       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 13:10:33.587194       1 config.go:105] "Starting endpoint slice config controller"
	I0317 13:10:33.588568       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 13:10:33.588619       1 config.go:199] "Starting service config controller"
	I0317 13:10:33.588627       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 13:10:33.591465       1 config.go:329] "Starting node config controller"
	I0317 13:10:33.591991       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 13:10:33.689240       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0317 13:10:33.689161       1 shared_informer.go:320] Caches are synced for service config
	I0317 13:10:33.692206       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ed4022dcab64] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0317 13:17:22.059127       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0317 13:17:22.092147       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.31.3"]
	E0317 13:17:22.092370       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 13:17:22.192588       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0317 13:17:22.192889       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0317 13:17:22.199226       1 server_linux.go:170] "Using iptables Proxier"
	I0317 13:17:22.208333       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 13:17:22.208871       1 server.go:497] "Version info" version="v1.32.2"
	I0317 13:17:22.209593       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 13:17:22.223356       1 config.go:199] "Starting service config controller"
	I0317 13:17:22.226798       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 13:17:22.228512       1 config.go:105] "Starting endpoint slice config controller"
	I0317 13:17:22.228601       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 13:17:22.231568       1 config.go:329] "Starting node config controller"
	I0317 13:17:22.231923       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 13:17:22.327371       1 shared_informer.go:320] Caches are synced for service config
	I0317 13:17:22.329054       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0317 13:17:22.332838       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [48490bf5143c] <==
	W0317 13:10:23.590405       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0317 13:10:23.590523       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 13:10:23.610060       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0317 13:10:23.610153       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 13:10:23.738688       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0317 13:10:23.740693       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 13:10:23.837985       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0317 13:10:23.838097       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 13:10:23.850631       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0317 13:10:23.851070       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 13:10:23.851554       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0317 13:10:23.852226       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0317 13:10:23.932768       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0317 13:10:23.933225       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0317 13:10:23.981700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0317 13:10:23.981821       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 13:10:23.995757       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0317 13:10:23.996212       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0317 13:10:24.016153       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0317 13:10:24.016320       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0317 13:10:26.801476       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0317 13:16:55.425674       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0317 13:16:55.425708       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0317 13:16:55.426021       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0317 13:16:55.422034       1 run.go:72] "command failed" err="finished without leader elect"
	
	
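The burst of `is forbidden` warnings above is typical kube-scheduler startup noise: the reflectors begin listing resources before the scheduler's RBAC bindings have reconciled, and the messages stop once the informer caches sync (13:10:26 here). One way to triage a log like this is to pull out just the forbidden events and check whether they all precede the sync line; a minimal sketch, assuming the klog line format shown above (header `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`):

```python
import re

# klog header: severity, mmdd, timestamp, thread id, file:line], message
KLOG = re.compile(r'^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+\d+ (\S+)\] (.*)$')

def forbidden_events(lines):
    """Yield (timestamp, message) for RBAC 'forbidden' warnings/errors."""
    for line in lines:
        m = KLOG.match(line.strip())
        if m and 'is forbidden' in m.group(5):
            yield m.group(3), m.group(5)

# Two lines taken from the log above: one forbidden warning, one sync message.
sample = [
    'W0317 13:10:23.837985       1 reflector.go:569] failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope',
    'I0317 13:10:26.801476       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file',
]
events = list(forbidden_events(sample))
```

If forbidden events keep appearing after the caches-synced message, the `system:kube-scheduler` RBAC bindings are worth inspecting rather than writing the noise off as startup churn.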
	==> kube-scheduler [c57a64a068ec] <==
	I0317 13:17:17.605641       1 serving.go:386] Generated self-signed cert in-memory
	W0317 13:17:19.772491       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0317 13:17:19.772549       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0317 13:17:19.772562       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0317 13:17:19.772572       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0317 13:17:19.846337       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0317 13:17:19.849158       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 13:17:19.855004       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0317 13:17:19.858217       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0317 13:17:19.858302       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0317 13:17:19.859333       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0317 13:17:19.959694       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 17 13:17:18 pause-471400 kubelet[5348]: E0317 13:17:18.998849    5348 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-471400\" not found" node="pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: E0317 13:17:19.000448    5348 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-471400\" not found" node="pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: E0317 13:17:19.000688    5348 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-471400\" not found" node="pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: I0317 13:17:19.928847    5348 kubelet_node_status.go:125] "Node was previously registered" node="pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: I0317 13:17:19.928966    5348 kubelet_node_status.go:79] "Successfully registered node" node="pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: I0317 13:17:19.929002    5348 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: I0317 13:17:19.929917    5348 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: I0317 13:17:19.944739    5348 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: E0317 13:17:19.966658    5348 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-471400\" already exists" pod="kube-system/kube-scheduler-pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: I0317 13:17:19.966707    5348 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: E0317 13:17:19.980976    5348 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-471400\" already exists" pod="kube-system/etcd-pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: I0317 13:17:19.981115    5348 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: E0317 13:17:19.994215    5348 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-471400\" already exists" pod="kube-system/kube-apiserver-pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: I0317 13:17:19.994259    5348 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-471400"
	Mar 17 13:17:20 pause-471400 kubelet[5348]: E0317 13:17:20.030676    5348 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-471400\" already exists" pod="kube-system/kube-controller-manager-pause-471400"
	Mar 17 13:17:20 pause-471400 kubelet[5348]: I0317 13:17:20.529396    5348 apiserver.go:52] "Watching apiserver"
	Mar 17 13:17:20 pause-471400 kubelet[5348]: I0317 13:17:20.546946    5348 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Mar 17 13:17:20 pause-471400 kubelet[5348]: I0317 13:17:20.601451    5348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2be3017-491d-427e-982e-7fcdf387b94a-lib-modules\") pod \"kube-proxy-2w5n2\" (UID: \"d2be3017-491d-427e-982e-7fcdf387b94a\") " pod="kube-system/kube-proxy-2w5n2"
	Mar 17 13:17:20 pause-471400 kubelet[5348]: I0317 13:17:20.601864    5348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2be3017-491d-427e-982e-7fcdf387b94a-xtables-lock\") pod \"kube-proxy-2w5n2\" (UID: \"d2be3017-491d-427e-982e-7fcdf387b94a\") " pod="kube-system/kube-proxy-2w5n2"
	Mar 17 13:17:24 pause-471400 kubelet[5348]: I0317 13:17:24.247732    5348 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 17 13:17:29 pause-471400 kubelet[5348]: I0317 13:17:29.791896    5348 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 17 13:17:46 pause-471400 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Mar 17 13:17:46 pause-471400 systemd[1]: kubelet.service: Deactivated successfully.
	Mar 17 13:17:46 pause-471400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 17 13:17:46 pause-471400 systemd[1]: kubelet.service: Consumed 1.585s CPU time.
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-471400 -n pause-471400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-471400 -n pause-471400: exit status 2 (14.1952447s)

-- stdout --
	Paused

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-471400" apiserver is not running, skipping kubectl commands (state="Paused")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-471400 -n pause-471400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-471400 -n pause-471400: exit status 2 (14.0926233s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/Unpause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/Unpause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-471400 logs -n 25
E0317 13:19:15.852491    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-471400 logs -n 25: (20.5607372s)
helpers_test.go:252: TestPause/serial/Unpause logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-841900 sudo crio     | cilium-841900             | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:55 UTC |                     |
	|         | config                         |                           |                   |         |                     |                     |
	| delete  | -p cilium-841900               | cilium-841900             | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:55 UTC | 17 Mar 25 12:55 UTC |
	| start   | -p force-systemd-env-265000    | force-systemd-env-265000  | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:55 UTC | 17 Mar 25 13:02 UTC |
	|         | --memory=2048                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p NoKubernetes-183300         | NoKubernetes-183300       | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:56 UTC | 17 Mar 25 12:56 UTC |
	| start   | -p kubernetes-upgrade-816300   | kubernetes-upgrade-816300 | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:56 UTC | 17 Mar 25 13:04 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p offline-docker-183300       | offline-docker-183300     | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:57 UTC | 17 Mar 25 12:58 UTC |
	| start   | -p stopped-upgrade-112300      | minikube                  | minikube6\jenkins | v1.26.0 | 17 Mar 25 12:58 GMT | 17 Mar 25 13:07 GMT |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --vm-driver=hyperv             |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-374500      | running-upgrade-374500    | minikube6\jenkins | v1.35.0 | 17 Mar 25 12:59 UTC | 17 Mar 25 13:08 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| ssh     | force-systemd-env-265000       | force-systemd-env-265000  | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:02 UTC | 17 Mar 25 13:02 UTC |
	|         | ssh docker info --format       |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}              |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-env-265000    | force-systemd-env-265000  | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:02 UTC | 17 Mar 25 13:03 UTC |
	| start   | -p pause-471400 --memory=2048  | pause-471400              | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:03 UTC | 17 Mar 25 13:11 UTC |
	|         | --install-addons=false         |                           |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv     |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-816300   | kubernetes-upgrade-816300 | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:04 UTC | 17 Mar 25 13:05 UTC |
	| start   | -p kubernetes-upgrade-816300   | kubernetes-upgrade-816300 | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:05 UTC | 17 Mar 25 13:12 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| stop    | stopped-upgrade-112300 stop    | minikube                  | minikube6\jenkins | v1.26.0 | 17 Mar 25 13:07 GMT | 17 Mar 25 13:07 GMT |
	| start   | -p stopped-upgrade-112300      | stopped-upgrade-112300    | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:07 UTC | 17 Mar 25 13:14 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p running-upgrade-374500      | running-upgrade-374500    | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:08 UTC | 17 Mar 25 13:09 UTC |
	| start   | -p cert-expiration-735200      | cert-expiration-735200    | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:09 UTC | 17 Mar 25 13:16 UTC |
	|         | --memory=2048                  |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p pause-471400                | pause-471400              | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:11 UTC | 17 Mar 25 13:17 UTC |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-816300   | kubernetes-upgrade-816300 | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:12 UTC |                     |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-816300   | kubernetes-upgrade-816300 | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:12 UTC | 17 Mar 25 13:18 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p stopped-upgrade-112300      | stopped-upgrade-112300    | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:14 UTC | 17 Mar 25 13:15 UTC |
	| start   | -p docker-flags-664100         | docker-flags-664100       | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:15 UTC |                     |
	|         | --cache-images=false           |                           |                   |         |                     |                     |
	|         | --memory=2048                  |                           |                   |         |                     |                     |
	|         | --install-addons=false         |                           |                   |         |                     |                     |
	|         | --wait=false                   |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR           |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT           |                           |                   |         |                     |                     |
	|         | --docker-opt=debug             |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| pause   | -p pause-471400                | pause-471400              | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:17 UTC | 17 Mar 25 13:17 UTC |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	| unpause | -p pause-471400                | pause-471400              | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:18 UTC |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	| delete  | -p kubernetes-upgrade-816300   | kubernetes-upgrade-816300 | minikube6\jenkins | v1.35.0 | 17 Mar 25 13:18 UTC |                     |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
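When scanning the Audit table for the failing step, the rows whose End Time cell is empty (e.g. the `unpause -p pause-471400` row) are the commands that never completed. A small sketch that extracts them, assuming the pipe-delimited layout above, where multi-line args are continuation rows with a blank Command cell:

```python
def parse_audit_rows(table_lines):
    """Parse the pipe-delimited Audit table; return rows as dicts keyed by header."""
    rows, header = [], None
    for line in table_lines:
        line = line.strip()
        if not line.startswith('|') or set(line) <= {'|', '-'}:
            continue  # skip ruler lines and anything outside the table
        cells = [c.strip() for c in line.strip('|').split('|')]
        if header is None:
            header = cells  # first non-ruler row is the header
        elif cells[0]:  # continuation rows leave the Command cell blank
            rows.append(dict(zip(header, cells)))
    return rows

# A reduced sample in the same layout as the table above.
sample = [
    '|---------|------|---------|---------------------|---------------------|',
    '| Command | Args | Profile |     Start Time      |      End Time       |',
    '|---------|------|---------|---------------------|---------------------|',
    '| unpause | -p p | pause-x | 17 Mar 25 13:18 UTC |                     |',
    '| pause   | -p p | pause-x | 17 Mar 25 13:17 UTC | 17 Mar 25 13:17 UTC |',
]
unfinished = [r['Command'] for r in parse_audit_rows(sample) if not r['End Time']]
```

Applied to the full table, this surfaces each command still running (or failed) when the log snapshot was taken, which is usually the fastest way to correlate the Audit trail with the failing test.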
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 13:15:13
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 13:15:13.663151    7220 out.go:345] Setting OutFile to fd 1536 ...
	I0317 13:15:13.749602    7220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:15:13.749602    7220 out.go:358] Setting ErrFile to fd 1652...
	I0317 13:15:13.749602    7220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:15:13.773383    7220 out.go:352] Setting JSON to false
	I0317 13:15:13.776727    7220 start.go:129] hostinfo: {"hostname":"minikube6","uptime":11090,"bootTime":1742206223,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5608 Build 19045.5608","kernelVersion":"10.0.19045.5608 Build 19045.5608","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0317 13:15:13.776727    7220 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0317 13:15:13.787047    7220 out.go:177] * [docker-flags-664100] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608
	I0317 13:15:13.793247    7220 notify.go:220] Checking for updates...
	I0317 13:15:13.794481    7220 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 13:15:13.797479    7220 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:15:13.800167    7220 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0317 13:15:13.803213    7220 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 13:15:13.805414    7220 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:15:14.126619   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:15:14.127674   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:14.133802   10084 main.go:141] libmachine: Using SSH client type: native
	I0317 13:15:14.133957   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.26.33 22 <nil> <nil>}
	I0317 13:15:14.133957   10084 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0317 13:15:16.552934   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0317 13:15:16.552934   10084 machine.go:96] duration metric: took 48.9495166s to provisionDockerMachine
	I0317 13:15:16.552934   10084 client.go:171] duration metric: took 2m8.1588137s to LocalClient.Create
	I0317 13:15:16.553011   10084 start.go:167] duration metric: took 2m8.1590376s to libmachine.API.Create "cert-expiration-735200"
	I0317 13:15:16.553011   10084 start.go:293] postStartSetup for "cert-expiration-735200" (driver="hyperv")
	I0317 13:15:16.553011   10084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:15:16.570328   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:15:16.570328   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:15:13.809618    7220 config.go:182] Loaded profile config "cert-expiration-735200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:15:13.810088    7220 config.go:182] Loaded profile config "ha-450500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:15:13.810785    7220 config.go:182] Loaded profile config "kubernetes-upgrade-816300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:15:13.811208    7220 config.go:182] Loaded profile config "pause-471400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:15:13.811208    7220 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:15:19.518826    7220 out.go:177] * Using the hyperv driver based on user configuration
	I0317 13:15:19.522716    7220 start.go:297] selected driver: hyperv
	I0317 13:15:19.522716    7220 start.go:901] validating driver "hyperv" against <nil>
	I0317 13:15:19.522716    7220 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:15:19.578528    7220 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 13:15:19.579283    7220 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0317 13:15:19.580341    7220 cni.go:84] Creating CNI manager for ""
	I0317 13:15:19.580341    7220 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 13:15:19.580341    7220 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0317 13:15:19.580341    7220 start.go:340] cluster config:
	{Name:docker-flags-664100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:docker-flags-664100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:15:19.580341    7220 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 13:15:19.587799    7220 out.go:177] * Starting "docker-flags-664100" primary control-plane node in "docker-flags-664100" cluster
	I0317 13:15:19.443094   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:19.443094   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:19.444097   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:19.590655    7220 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 13:15:19.590879    7220 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0317 13:15:19.590935    7220 cache.go:56] Caching tarball of preloaded images
	I0317 13:15:19.591271    7220 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 13:15:19.591524    7220 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0317 13:15:19.591848    7220 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\docker-flags-664100\config.json ...
	I0317 13:15:19.592309    7220 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\docker-flags-664100\config.json: {Name:mk8f96dcd7109b2db4c71e9e8573ce48dccde009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:15:19.594098    7220 start.go:360] acquireMachinesLock for docker-flags-664100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0317 13:15:22.704276   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:15:22.704276   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:22.704276   10084 sshutil.go:53] new ssh client: &{IP:172.25.26.33 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-735200\id_rsa Username:docker}
	I0317 13:15:22.822253   10084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (6.2518556s)
	I0317 13:15:22.833211   10084 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:15:22.841540   10084 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:15:22.841540   10084 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0317 13:15:22.842144   10084 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0317 13:15:22.843104   10084 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> 89402.pem in /etc/ssl/certs
	I0317 13:15:22.856913   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:15:22.879477   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /etc/ssl/certs/89402.pem (1708 bytes)
	I0317 13:15:22.928697   10084 start.go:296] duration metric: took 6.3756152s for postStartSetup
	I0317 13:15:22.931199   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:15:25.176095   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:25.176095   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:25.176244   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:27.790826   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:15:27.790826   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:27.791238   10084 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\config.json ...
	I0317 13:15:27.794633   10084 start.go:128] duration metric: took 2m19.4051263s to createHost
	I0317 13:15:27.794710   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:15:29.958882   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:29.959504   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:29.959589   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:32.572165   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:15:32.572165   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:32.577161   10084 main.go:141] libmachine: Using SSH client type: native
	I0317 13:15:32.577161   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.26.33 22 <nil> <nil>}
	I0317 13:15:32.577161   10084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:15:32.715225   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742217332.739100495
	
	I0317 13:15:32.715225   10084 fix.go:216] guest clock: 1742217332.739100495
	I0317 13:15:32.715300   10084 fix.go:229] Guest: 2025-03-17 13:15:32.739100495 +0000 UTC Remote: 2025-03-17 13:15:27.7946334 +0000 UTC m=+340.785027101 (delta=4.944467095s)
	I0317 13:15:32.715300   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:15:34.862568   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:34.863564   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:34.863564   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:37.678778    2276 start.go:364] duration metric: took 4m19.9205964s to acquireMachinesLock for "pause-471400"
	I0317 13:15:37.679677    2276 start.go:96] Skipping create...Using existing machine configuration
	I0317 13:15:37.679677    2276 fix.go:54] fixHost starting: 
	I0317 13:15:37.680453    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:15:40.023699    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:40.023699    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:40.023699    2276 fix.go:112] recreateIfNeeded on pause-471400: state=Running err=<nil>
	W0317 13:15:40.023699    2276 fix.go:138] unexpected machine state, will restart: <nil>
	I0317 13:15:40.030166    2276 out.go:177] * Updating the running hyperv "pause-471400" VM ...
	I0317 13:15:40.032400    2276 machine.go:93] provisionDockerMachine start ...
	I0317 13:15:40.032400    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:15:37.515882   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:15:37.515992   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:37.520574   10084 main.go:141] libmachine: Using SSH client type: native
	I0317 13:15:37.521404   10084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.26.33 22 <nil> <nil>}
	I0317 13:15:37.521404   10084 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1742217332
	I0317 13:15:37.678778   10084 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 17 13:15:32 UTC 2025
	
	I0317 13:15:37.678778   10084 fix.go:236] clock set: Mon Mar 17 13:15:32 UTC 2025
	 (err=<nil>)
	I0317 13:15:37.678778   10084 start.go:83] releasing machines lock for "cert-expiration-735200", held for 2m29.2895158s
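The lines above show minikube's guest-clock fix: it runs `date +%s.%N` over SSH, compares the result to the host clock (`fix.go` reports a ~4.9s delta), then resets the guest with `sudo date -s @<epoch>`. A minimal local sketch of that comparison, with a hypothetical `run_on_vm` helper standing in for minikube's ssh_runner, is:

```shell
#!/bin/sh
# Sketch of the guest-clock skew check seen in the log above.
# "run_on_vm" is a hypothetical stand-in for minikube's SSH runner;
# here it just executes the command locally, so the delta is ~0.
run_on_vm() { sh -c "$1"; }

host_epoch=$(date +%s)
guest_epoch=$(run_on_vm 'date +%s.%N' | cut -d. -f1)
delta=$((guest_epoch - host_epoch))

# minikube resets the guest clock when the skew is large;
# this sketch only reports it.
echo "clock delta: ${delta}s"
```

Running locally the two clocks are the same, so the reported delta is zero or one second.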
	I0317 13:15:37.678778   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:15:40.004837   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:40.004837   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:40.005539   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:42.333417    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:42.333417    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:42.333417    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:45.093520    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:15:45.093520    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:45.100127    2276 main.go:141] libmachine: Using SSH client type: native
	I0317 13:15:45.100583    2276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.3 22 <nil> <nil>}
	I0317 13:15:45.100583    2276 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 13:15:45.245898    2276 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-471400
	
	I0317 13:15:45.245898    2276 buildroot.go:166] provisioning hostname "pause-471400"
	I0317 13:15:45.246037    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:15:42.761089   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:15:42.761089   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:42.769029   10084 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0317 13:15:42.769029   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:15:42.781633   10084 ssh_runner.go:195] Run: cat /version.json
	I0317 13:15:42.781633   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:15:45.131894   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:45.131894   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:45.131894   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:45.131894   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:45.131894   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:45.131894   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:47.643913    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:47.644126    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:47.644225    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:50.330442    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:15:50.330442    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:50.342292    2276 main.go:141] libmachine: Using SSH client type: native
	I0317 13:15:50.343069    2276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.3 22 <nil> <nil>}
	I0317 13:15:50.343069    2276 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-471400 && echo "pause-471400" | sudo tee /etc/hostname
	I0317 13:15:50.518938    2276 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-471400
	
	I0317 13:15:50.518971    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:15:48.025192   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:15:48.025192   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:48.025408   10084 sshutil.go:53] new ssh client: &{IP:172.25.26.33 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-735200\id_rsa Username:docker}
	I0317 13:15:48.051725   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:15:48.051725   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:48.052187   10084 sshutil.go:53] new ssh client: &{IP:172.25.26.33 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-735200\id_rsa Username:docker}
	I0317 13:15:48.121601   10084 ssh_runner.go:235] Completed: cat /version.json: (5.3399087s)
	I0317 13:15:48.133722   10084 ssh_runner.go:195] Run: systemctl --version
	I0317 13:15:48.139761   10084 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.3705678s)
	W0317 13:15:48.139761   10084 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0317 13:15:48.158648   10084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:15:48.169279   10084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:15:48.182383   10084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:15:48.215174   10084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 13:15:48.215327   10084 start.go:495] detecting cgroup driver to use...
	I0317 13:15:48.215392   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0317 13:15:48.251994   10084 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0317 13:15:48.251994   10084 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0317 13:15:48.268981   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 13:15:48.302775   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 13:15:48.325486   10084 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 13:15:48.338040   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 13:15:48.370634   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 13:15:48.401453   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 13:15:48.431424   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 13:15:48.463628   10084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:15:48.495288   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 13:15:48.525755   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 13:15:48.557328   10084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
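The containerd reconfiguration above is a series of in-place GNU `sed` edits against `/etc/containerd/config.toml`. A sketch of the cgroup-driver substitution, run against a throwaway copy rather than the real config file, looks like:

```shell
#!/bin/sh
# Sketch of the SystemdCgroup edit from the log, applied to a scratch file.
cfg=$(mktemp)
printf '%s\n' '[plugins."io.containerd.grpc.v1.cri"]' \
              '  SystemdCgroup = true' > "$cfg"

# Same substitution the log runs; \1 preserves the original indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

result=$(grep 'SystemdCgroup' "$cfg")
rm -f "$cfg"
echo "$result"
```

The capture-group trick matters here: these files are indented TOML tables, and a naive replacement would flatten the indentation.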
	I0317 13:15:48.587885   10084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:15:48.606039   10084 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0317 13:15:48.617767   10084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0317 13:15:48.653186   10084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:15:48.684979   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:15:48.900040   10084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 13:15:48.932913   10084 start.go:495] detecting cgroup driver to use...
	I0317 13:15:48.943837   10084 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0317 13:15:48.982219   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:15:49.017237   10084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:15:49.064607   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:15:49.103129   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 13:15:49.140710   10084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 13:15:49.204040   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 13:15:49.237448   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:15:49.293424   10084 ssh_runner.go:195] Run: which cri-dockerd
	I0317 13:15:49.313300   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0317 13:15:49.332960   10084 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0317 13:15:49.375679   10084 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0317 13:15:49.580406   10084 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0317 13:15:49.776163   10084 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0317 13:15:49.776531   10084 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0317 13:15:49.822471   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:15:50.030184   10084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 13:15:52.659946   10084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6296866s)
	I0317 13:15:52.673279   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0317 13:15:52.709044   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 13:15:52.749322   10084 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0317 13:15:52.968283   10084 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0317 13:15:53.191229   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:15:53.414323   10084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0317 13:15:53.459805   10084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 13:15:53.496759   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:15:53.704311   10084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0317 13:15:53.839399   10084 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0317 13:15:53.853577   10084 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0317 13:15:53.863927   10084 start.go:563] Will wait 60s for crictl version
	I0317 13:15:53.876591   10084 ssh_runner.go:195] Run: which crictl
	I0317 13:15:53.894838   10084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:15:53.950431   10084 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0317 13:15:53.960321   10084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 13:15:54.007178   10084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 13:15:52.738920    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:52.739038    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:52.739038    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:15:55.567523    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:15:55.567523    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:55.572503    2276 main.go:141] libmachine: Using SSH client type: native
	I0317 13:15:55.572503    2276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.3 22 <nil> <nil>}
	I0317 13:15:55.572503    2276 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-471400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-471400/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-471400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:15:55.723578    2276 main.go:141] libmachine: SSH cmd err, output: <nil>: 
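The SSH command above either rewrites an existing `127.0.1.1` line or appends one so the VM can resolve its own hostname. That logic can be exercised locally against a scratch hosts file (a sketch; the real code targets `/etc/hosts` over SSH with sudo):

```shell
#!/bin/sh
# Sketch of minikube's 127.0.1.1 hostname rewrite, using a temp file
# instead of the real /etc/hosts.
hosts=$(mktemp)
name=pause-471400
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"

if ! grep -q "\s$name\$" "$hosts"; then
  if grep -q '^127.0.1.1\s' "$hosts"; then
    # An entry exists: replace it with the desired hostname.
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 $name/" "$hosts"
  else
    # No entry yet: append one.
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
result=$(grep '^127.0.1.1' "$hosts")
rm -f "$hosts"
echo "$result"
```

Guarding with the first `grep` keeps the operation idempotent, which matters because minikube re-provisions running machines (as the `pause-471400` restart in this log does).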
	I0317 13:15:55.723578    2276 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0317 13:15:55.723578    2276 buildroot.go:174] setting up certificates
	I0317 13:15:55.723578    2276 provision.go:84] configureAuth start
	I0317 13:15:55.724568    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:15:54.049889   10084 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0317 13:15:54.049998   10084 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0317 13:15:54.054184   10084 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0317 13:15:54.054184   10084 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0317 13:15:54.054184   10084 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0317 13:15:54.054184   10084 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4b:84:d5 Flags:up|broadcast|multicast|running}
	I0317 13:15:54.057448   10084 ip.go:214] interface addr: fe80::f0c7:c31c:6237:ef35/64
	I0317 13:15:54.057448   10084 ip.go:214] interface addr: 172.25.16.1/20
	I0317 13:15:54.068551   10084 ssh_runner.go:195] Run: grep 172.25.16.1	host.minikube.internal$ /etc/hosts
	I0317 13:15:54.074601   10084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.25.16.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 13:15:54.098462   10084 kubeadm.go:883] updating cluster {Name:cert-expiration-735200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:cert-
expiration-735200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.26.33 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:15:54.098462   10084 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 13:15:54.109241   10084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 13:15:54.136062   10084 docker.go:689] Got preloaded images: 
	I0317 13:15:54.136098   10084 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.2 wasn't preloaded
	I0317 13:15:54.150007   10084 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0317 13:15:54.181579   10084 ssh_runner.go:195] Run: which lz4
	I0317 13:15:54.201164   10084 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0317 13:15:54.208859   10084 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0317 13:15:54.208859   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349803115 bytes)
	I0317 13:15:56.339331   10084 docker.go:653] duration metric: took 2.1505782s to copy over tarball
	I0317 13:15:56.352800   10084 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0317 13:15:58.145545    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:15:58.145545    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:15:58.145846    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:00.894292    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:00.894292    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:00.894292    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:03.132372    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:03.132372    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:03.132372    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:05.846976    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:05.846976    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:05.846976    2276 provision.go:143] copyHostCerts
	I0317 13:16:05.846976    2276 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0317 13:16:05.846976    2276 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0317 13:16:05.846976    2276 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0317 13:16:05.856382    2276 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0317 13:16:05.856382    2276 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0317 13:16:05.856840    2276 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0317 13:16:05.857922    2276 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0317 13:16:05.857922    2276 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0317 13:16:05.857922    2276 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0317 13:16:05.859397    2276 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-471400 san=[127.0.0.1 172.25.31.3 localhost minikube pause-471400]
	I0317 13:16:05.938970    2276 provision.go:177] copyRemoteCerts
	I0317 13:16:05.950346    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:16:05.950346    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:05.014446   10084 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.661517s)
	I0317 13:16:05.014446   10084 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0317 13:16:05.103775   10084 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0317 13:16:05.124951   10084 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0317 13:16:05.175005   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:16:05.395064   10084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 13:16:08.212308    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:08.213540    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:08.213680    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:10.890015    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:10.890015    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:10.890570    2276 sshutil.go:53] new ssh client: &{IP:172.25.31.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-471400\id_rsa Username:docker}
	I0317 13:16:11.006174    2276 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0557713s)
	I0317 13:16:11.007173    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 13:16:11.071340    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:16:11.127593    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0317 13:16:11.182101    2276 provision.go:87] duration metric: took 15.4583505s to configureAuth
	I0317 13:16:11.182158    2276 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:16:11.182766    2276 config.go:182] Loaded profile config "pause-471400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:16:11.182818    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:08.591003   10084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1958373s)
	I0317 13:16:08.603788   10084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 13:16:08.643603   10084 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0317 13:16:08.643603   10084 cache_images.go:84] Images are preloaded, skipping loading
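The `cache_images.go:84` decision above compares the `docker images --format {{.Repository}}:{{.Tag}}` listing against the expected preload set and skips the load when everything is already present. A minimal Go sketch of that comparison (the helper name `havePreloadedImages` is hypothetical, not minikube's actual function):

```go
package main

import (
	"fmt"
	"strings"
)

// havePreloadedImages reports whether every required image appears in the
// newline-separated output of `docker images --format {{.Repository}}:{{.Tag}}`.
// Hypothetical helper; minikube's real check lives in cache_images.go.
func havePreloadedImages(dockerOut string, required []string) bool {
	present := make(map[string]bool)
	for _, line := range strings.Split(strings.TrimSpace(dockerOut), "\n") {
		present[strings.TrimSpace(line)] = true
	}
	for _, img := range required {
		if !present[img] {
			return false
		}
	}
	return true
}

func main() {
	out := "registry.k8s.io/kube-apiserver:v1.32.2\nregistry.k8s.io/pause:3.10\n"
	fmt.Println(havePreloadedImages(out, []string{"registry.k8s.io/pause:3.10"}))
}
```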
	I0317 13:16:08.644197   10084 kubeadm.go:934] updating node { 172.25.26.33 8443 v1.32.2 docker true true} ...
	I0317 13:16:08.644305   10084 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-735200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.26.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:cert-expiration-735200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 13:16:08.654995   10084 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0317 13:16:08.735924   10084 cni.go:84] Creating CNI manager for ""
	I0317 13:16:08.736123   10084 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 13:16:08.736123   10084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:16:08.736188   10084 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.26.33 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-735200 NodeName:cert-expiration-735200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.26.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.26.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
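(anchor unused)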
	I0317 13:16:08.736362   10084 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.26.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "cert-expiration-735200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.26.33"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.26.33"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 13:16:08.748823   10084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 13:16:08.768149   10084 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:16:08.781501   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:16:08.801938   10084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0317 13:16:08.844323   10084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:16:08.879895   10084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
	I0317 13:16:08.924852   10084 ssh_runner.go:195] Run: grep 172.25.26.33	control-plane.minikube.internal$ /etc/hosts
	I0317 13:16:08.931934   10084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.25.26.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
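The two commands above first `grep` for an existing `control-plane.minikube.internal` entry, then rewrite `/etc/hosts` so exactly one line maps that hostname to the node IP. A Go sketch of the same idempotent rewrite (the function name is hypothetical; minikube actually runs it as the remote bash one-liner shown):

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry rewrites hosts-file content so that exactly one line maps
// hostname to ip, mirroring the grep -v / echo pipeline in the log above.
func ensureHostsEntry(hosts, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop any stale entry for this hostname
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n10.0.0.5\tcontrol-plane.minikube.internal\n"
	fmt.Print(ensureHostsEntry(in, "172.25.26.33", "control-plane.minikube.internal"))
}
```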
	I0317 13:16:08.967134   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:16:09.173356   10084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:16:09.208512   10084 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200 for IP: 172.25.26.33
	I0317 13:16:09.208555   10084 certs.go:194] generating shared ca certs ...
	I0317 13:16:09.208555   10084 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:09.209647   10084 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0317 13:16:09.209994   10084 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0317 13:16:09.210232   10084 certs.go:256] generating profile certs ...
	I0317 13:16:09.210898   10084 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\client.key
	I0317 13:16:09.211013   10084 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\client.crt with IP's: []
	I0317 13:16:09.331844   10084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\client.crt ...
	I0317 13:16:09.331844   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\client.crt: {Name:mkfc95eec9a09c287b456d437f306c7394253466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:09.333840   10084 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\client.key ...
	I0317 13:16:09.333840   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\client.key: {Name:mk2681d3afe0b88cde2b3c3018a78070247e4809 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:09.334885   10084 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.key.0e4993fe
	I0317 13:16:09.335852   10084 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.crt.0e4993fe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.25.26.33]
	I0317 13:16:09.820277   10084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.crt.0e4993fe ...
	I0317 13:16:09.820277   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.crt.0e4993fe: {Name:mkbb592f7beb6bef58a4fcc965da6636e600e7bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:09.821329   10084 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.key.0e4993fe ...
	I0317 13:16:09.821329   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.key.0e4993fe: {Name:mkd1fd758c7a2ea97e380883f6fda251dc135c4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:09.822297   10084 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.crt.0e4993fe -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.crt
	I0317 13:16:09.837307   10084 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.key.0e4993fe -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.key
	I0317 13:16:09.838311   10084 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\proxy-client.key
	I0317 13:16:09.838311   10084 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\proxy-client.crt with IP's: []
	I0317 13:16:10.130052   10084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\proxy-client.crt ...
	I0317 13:16:10.130052   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\proxy-client.crt: {Name:mk195c67a2cbece6211ae24cd3c4b34154ce48a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:10.132013   10084 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\proxy-client.key ...
	I0317 13:16:10.132013   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\proxy-client.key: {Name:mk6f7d9fb1ec8ffb89e0bcbd199a6def3c149cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:10.146619   10084 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem (1338 bytes)
	W0317 13:16:10.147022   10084 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940_empty.pem, impossibly tiny 0 bytes
	I0317 13:16:10.147022   10084 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0317 13:16:10.147022   10084 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0317 13:16:10.147022   10084 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0317 13:16:10.148044   10084 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0317 13:16:10.148044   10084 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem (1708 bytes)
	I0317 13:16:10.150485   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:16:10.201841   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:16:10.253384   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:16:10.296325   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0317 13:16:10.342516   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0317 13:16:10.389999   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 13:16:10.438138   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:16:10.490212   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-735200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 13:16:10.540288   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:16:10.595094   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem --> /usr/share/ca-certificates/8940.pem (1338 bytes)
	I0317 13:16:10.644409   10084 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /usr/share/ca-certificates/89402.pem (1708 bytes)
	I0317 13:16:10.691418   10084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:16:10.734863   10084 ssh_runner.go:195] Run: openssl version
	I0317 13:16:10.756429   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/89402.pem && ln -fs /usr/share/ca-certificates/89402.pem /etc/ssl/certs/89402.pem"
	I0317 13:16:10.793498   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89402.pem
	I0317 13:16:10.801501   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:46 /usr/share/ca-certificates/89402.pem
	I0317 13:16:10.811483   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89402.pem
	I0317 13:16:10.831433   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/89402.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:16:10.862899   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:16:10.896714   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:16:10.907493   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:16:10.919422   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:16:10.945921   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 13:16:10.978521   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8940.pem && ln -fs /usr/share/ca-certificates/8940.pem /etc/ssl/certs/8940.pem"
	I0317 13:16:11.009170   10084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8940.pem
	I0317 13:16:11.016802   10084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:46 /usr/share/ca-certificates/8940.pem
	I0317 13:16:11.027980   10084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8940.pem
	I0317 13:16:11.048758   10084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8940.pem /etc/ssl/certs/51391683.0"
	I0317 13:16:11.083350   10084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:16:11.091339   10084 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 13:16:11.091339   10084 kubeadm.go:392] StartCluster: {Name:cert-expiration-735200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:cert-expiration-735200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.26.33 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:16:11.100342   10084 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0317 13:16:11.141400   10084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:16:11.171429   10084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:16:11.203422   10084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:16:11.227584   10084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 13:16:11.227584   10084 kubeadm.go:157] found existing configuration files:
	
	I0317 13:16:11.239726   10084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:16:11.260551   10084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 13:16:11.273179   10084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 13:16:11.317966   10084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:16:11.341628   10084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 13:16:11.358696   10084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 13:16:11.394739   10084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:16:11.412816   10084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 13:16:11.426816   10084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:16:11.456201   10084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:16:11.474816   10084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 13:16:11.485877   10084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 13:16:11.503410   10084 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0317 13:16:12.000093   10084 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 13:16:13.395657    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:13.395926    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:13.396199    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:16.023039    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:16.023039    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:16.029903    2276 main.go:141] libmachine: Using SSH client type: native
	I0317 13:16:16.030805    2276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.3 22 <nil> <nil>}
	I0317 13:16:16.030805    2276 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0317 13:16:16.173038    2276 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0317 13:16:16.173038    2276 buildroot.go:70] root file system type: tmpfs
	I0317 13:16:16.173038    2276 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0317 13:16:16.173038    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:18.389773    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:18.389830    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:18.389963    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:21.076529    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:21.077479    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:21.084373    2276 main.go:141] libmachine: Using SSH client type: native
	I0317 13:16:21.085064    2276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.3 22 <nil> <nil>}
	I0317 13:16:21.085064    2276 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0317 13:16:21.275965    2276 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0317 13:16:21.276059    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:26.867476   10084 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 13:16:26.867476   10084 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 13:16:26.868119   10084 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 13:16:26.868490   10084 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 13:16:26.868490   10084 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 13:16:26.868490   10084 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 13:16:26.871592   10084 out.go:235]   - Generating certificates and keys ...
	I0317 13:16:26.872512   10084 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 13:16:26.872683   10084 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 13:16:26.872852   10084 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 13:16:26.872852   10084 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 13:16:26.873409   10084 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 13:16:26.873503   10084 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 13:16:26.873503   10084 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 13:16:26.874163   10084 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-735200 localhost] and IPs [172.25.26.33 127.0.0.1 ::1]
	I0317 13:16:26.874163   10084 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 13:16:26.874756   10084 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-735200 localhost] and IPs [172.25.26.33 127.0.0.1 ::1]
	I0317 13:16:26.874756   10084 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 13:16:26.874756   10084 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 13:16:26.874756   10084 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 13:16:26.875430   10084 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 13:16:26.875508   10084 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 13:16:26.875508   10084 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 13:16:26.875508   10084 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 13:16:26.876044   10084 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 13:16:26.876148   10084 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 13:16:26.876302   10084 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 13:16:26.876302   10084 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 13:16:26.898707   10084 out.go:235]   - Booting up control plane ...
	I0317 13:16:26.899692   10084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 13:16:26.899757   10084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 13:16:26.899757   10084 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 13:16:26.900475   10084 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 13:16:26.900609   10084 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 13:16:26.900854   10084 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 13:16:26.901252   10084 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 13:16:26.901706   10084 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 13:16:26.902329   10084 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002510142s
	I0317 13:16:26.902473   10084 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 13:16:26.902655   10084 kubeadm.go:310] [api-check] The API server is healthy after 7.502380999s
	I0317 13:16:26.902943   10084 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 13:16:26.903533   10084 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 13:16:26.903799   10084 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 13:16:26.904605   10084 kubeadm.go:310] [mark-control-plane] Marking the node cert-expiration-735200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 13:16:26.904742   10084 kubeadm.go:310] [bootstrap-token] Using token: dn2h2j.b5b63hbxefnjchqa
	I0317 13:16:26.907543   10084 out.go:235]   - Configuring RBAC rules ...
	I0317 13:16:26.907997   10084 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 13:16:26.908220   10084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 13:16:26.908220   10084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 13:16:26.908929   10084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 13:16:26.909475   10084 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 13:16:26.909670   10084 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 13:16:26.909670   10084 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 13:16:26.909670   10084 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 13:16:26.910216   10084 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 13:16:26.910216   10084 kubeadm.go:310] 
	I0317 13:16:26.910283   10084 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 13:16:26.910283   10084 kubeadm.go:310] 
	I0317 13:16:26.910283   10084 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 13:16:26.910283   10084 kubeadm.go:310] 
	I0317 13:16:26.910910   10084 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 13:16:26.910910   10084 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 13:16:26.910910   10084 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 13:16:26.910910   10084 kubeadm.go:310] 
	I0317 13:16:26.910910   10084 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 13:16:26.910910   10084 kubeadm.go:310] 
	I0317 13:16:26.910910   10084 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 13:16:26.911483   10084 kubeadm.go:310] 
	I0317 13:16:26.911628   10084 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 13:16:26.911628   10084 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 13:16:26.911628   10084 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 13:16:26.911628   10084 kubeadm.go:310] 
	I0317 13:16:26.911628   10084 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 13:16:26.911628   10084 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 13:16:26.911628   10084 kubeadm.go:310] 
	I0317 13:16:26.912558   10084 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dn2h2j.b5b63hbxefnjchqa \
	I0317 13:16:26.912558   10084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 \
	I0317 13:16:26.912558   10084 kubeadm.go:310] 	--control-plane 
	I0317 13:16:26.912558   10084 kubeadm.go:310] 
	I0317 13:16:26.912558   10084 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 13:16:26.912558   10084 kubeadm.go:310] 
	I0317 13:16:26.913552   10084 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dn2h2j.b5b63hbxefnjchqa \
	I0317 13:16:26.913552   10084 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c322b0259bb8a6b4c6c1dc77ade13bbf0d2f6b9bd2605c58fcd3743199330256 
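The `--discovery-token-ca-cert-hash` in the join command above is the SHA-256 of the cluster CA's DER-encoded public key, per the standard kubeadm recipe. A sketch, using a throwaway self-signed certificate in place of `/etc/kubernetes/pki/ca.crt`:

```shell
# Generate a disposable CA certificate (stand-in for the real cluster CA).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
  -out /tmp/ca.crt -days 1 -subj "/CN=kubernetes" 2>/dev/null
# Extract the public key, convert to DER, and hash it - the same derivation
# kubeadm uses for the discovery hash.
hash=$(openssl x509 -pubkey -noout -in /tmp/ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$hash"
```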
	I0317 13:16:26.913552   10084 cni.go:84] Creating CNI manager for ""
	I0317 13:16:26.913552   10084 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 13:16:26.917550   10084 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0317 13:16:23.537836    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:23.537836    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:23.537836    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:26.246485    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:26.247494    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:26.253661    2276 main.go:141] libmachine: Using SSH client type: native
	I0317 13:16:26.254316    2276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.3 22 <nil> <nil>}
	I0317 13:16:26.254316    2276 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0317 13:16:26.408871    2276 main.go:141] libmachine: SSH cmd err, output: <nil>: 
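The SSH command above uses a compact update-if-changed idiom: `diff -u` exits 0 when the current and new unit files match (so nothing happens), and non-zero when they differ, in which case the `||` branch installs the new file and restarts the service. A sketch with illustrative `/tmp` paths:

```shell
# Stand-ins for /lib/systemd/system/docker.service{,.new}.
printf 'ExecStart=old\n' > /tmp/docker.service
printf 'ExecStart=new\n' > /tmp/docker.service.new
# diff exit status drives the swap: identical -> no-op, different -> install.
diff -u /tmp/docker.service /tmp/docker.service.new >/dev/null \
  || mv /tmp/docker.service.new /tmp/docker.service
cat /tmp/docker.service
```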
	I0317 13:16:26.408871    2276 machine.go:96] duration metric: took 46.3759547s to provisionDockerMachine
	I0317 13:16:26.408871    2276 start.go:293] postStartSetup for "pause-471400" (driver="hyperv")
	I0317 13:16:26.408871    2276 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:16:26.421819    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:16:26.421819    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:26.931531   10084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0317 13:16:26.952139   10084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
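The 496-byte file copied to `/etc/cni/net.d/1-k8s.conflist` above configures the bridge CNI that the earlier `cni.go` lines recommend. A sketch of the general shape of such a conflist (field values here are illustrative, not the exact file from this run):

```shell
# Write a minimal bridge + portmap conflist of the kind minikube installs.
cat > /tmp/1-k8s.conflist <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
# List the plugin types declared in the chain.
grep -o '"type": "[a-z-]*"' /tmp/1-k8s.conflist
```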
	I0317 13:16:26.995391   10084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 13:16:27.009696   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 13:16:27.012872   10084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-735200 minikube.k8s.io/updated_at=2025_03_17T13_16_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=cert-expiration-735200 minikube.k8s.io/primary=true
	I0317 13:16:27.039086   10084 ops.go:34] apiserver oom_adj: -16
	I0317 13:16:27.468118   10084 kubeadm.go:1113] duration metric: took 472.5592ms to wait for elevateKubeSystemPrivileges
	I0317 13:16:27.468199   10084 kubeadm.go:394] duration metric: took 16.3766765s to StartCluster
	I0317 13:16:27.468264   10084 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:27.468487   10084 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 13:16:27.471105   10084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:16:27.472633   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 13:16:27.472633   10084 start.go:235] Will wait 6m0s for node &{Name: IP:172.25.26.33 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 13:16:27.472756   10084 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 13:16:27.472851   10084 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-735200"
	I0317 13:16:27.472979   10084 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-735200"
	I0317 13:16:27.472979   10084 host.go:66] Checking if "cert-expiration-735200" exists ...
	I0317 13:16:27.474900   10084 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-735200"
	I0317 13:16:27.474900   10084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-735200"
	I0317 13:16:27.475046   10084 config.go:182] Loaded profile config "cert-expiration-735200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:16:27.476358   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:16:27.477483   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:16:27.477483   10084 out.go:177] * Verifying Kubernetes components...
	I0317 13:16:27.500270   10084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:16:27.776476   10084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.25.16.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 13:16:27.969681   10084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:16:28.456554   10084 start.go:971] {"host.minikube.internal": 172.25.16.1} host record injected into CoreDNS's ConfigMap
	I0317 13:16:28.461767   10084 api_server.go:52] waiting for apiserver process to appear ...
	I0317 13:16:28.473140   10084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:16:28.519497   10084 api_server.go:72] duration metric: took 1.0466338s to wait for apiserver process to appear ...
	I0317 13:16:28.519497   10084 api_server.go:88] waiting for apiserver healthz status ...
	I0317 13:16:28.519614   10084 api_server.go:253] Checking apiserver healthz at https://172.25.26.33:8443/healthz ...
	I0317 13:16:28.529782   10084 api_server.go:279] https://172.25.26.33:8443/healthz returned 200:
	ok
	I0317 13:16:28.532492   10084 api_server.go:141] control plane version: v1.32.2
	I0317 13:16:28.532492   10084 api_server.go:131] duration metric: took 12.9945ms to wait for apiserver health ...
	I0317 13:16:28.532586   10084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 13:16:28.539203   10084 system_pods.go:59] 4 kube-system pods found
	I0317 13:16:28.539203   10084 system_pods.go:61] "etcd-cert-expiration-735200" [8e6cd2c1-d5aa-4d29-ab6c-55d11fdfb4a7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0317 13:16:28.539203   10084 system_pods.go:61] "kube-apiserver-cert-expiration-735200" [1ed63eb0-2c99-40d5-9855-c8cea8f64d13] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0317 13:16:28.539203   10084 system_pods.go:61] "kube-controller-manager-cert-expiration-735200" [3625e5bd-4ca0-4167-a45d-9a0a5965d35a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0317 13:16:28.539203   10084 system_pods.go:61] "kube-scheduler-cert-expiration-735200" [990b6835-e0d8-4354-90b2-f2a3b3422304] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0317 13:16:28.539203   10084 system_pods.go:74] duration metric: took 6.6175ms to wait for pod list to return data ...
	I0317 13:16:28.539203   10084 kubeadm.go:582] duration metric: took 1.06634s to wait for: map[apiserver:true system_pods:true]
	I0317 13:16:28.539203   10084 node_conditions.go:102] verifying NodePressure condition ...
	I0317 13:16:28.544027   10084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 13:16:28.544027   10084 node_conditions.go:123] node cpu capacity is 2
	I0317 13:16:28.544027   10084 node_conditions.go:105] duration metric: took 4.8239ms to run NodePressure ...
	I0317 13:16:28.544027   10084 start.go:241] waiting for startup goroutines ...
	I0317 13:16:28.965274   10084 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-735200" context rescaled to 1 replicas
	I0317 13:16:30.002822   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:30.002822   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:30.005761   10084 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:16:28.884197    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:28.884594    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:28.884659    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:31.888974    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:31.888974    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:31.888974    2276 sshutil.go:53] new ssh client: &{IP:172.25.31.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-471400\id_rsa Username:docker}
	I0317 13:16:30.012112   10084 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:16:30.012112   10084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 13:16:30.012112   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:16:30.028399   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:30.028399   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:30.030879   10084 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-735200"
	I0317 13:16:30.030879   10084 host.go:66] Checking if "cert-expiration-735200" exists ...
	I0317 13:16:30.032275   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:16:32.017629    2276 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.5957478s)
	I0317 13:16:32.031225    2276 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:16:32.039551    2276 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:16:32.039551    2276 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0317 13:16:32.039551    2276 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0317 13:16:32.041782    2276 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> 89402.pem in /etc/ssl/certs
	I0317 13:16:32.053772    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:16:32.081080    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /etc/ssl/certs/89402.pem (1708 bytes)
	I0317 13:16:32.141719    2276 start.go:296] duration metric: took 5.7327834s for postStartSetup
	I0317 13:16:32.141841    2276 fix.go:56] duration metric: took 54.4615566s for fixHost
	I0317 13:16:32.141916    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:34.678109    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:34.678324    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:34.678324    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:32.662000   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:32.662000   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:32.662465   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:32.666262   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:32.666262   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:32.666335   10084 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 13:16:32.666335   10084 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 13:16:32.666393   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-735200 ).state
	I0317 13:16:35.202742   10084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:35.202742   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:35.202862   10084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-735200 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:35.646605   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:16:35.646889   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:35.647418   10084 sshutil.go:53] new ssh client: &{IP:172.25.26.33 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-735200\id_rsa Username:docker}
	I0317 13:16:35.805277   10084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:16:38.121052   10084 main.go:141] libmachine: [stdout =====>] : 172.25.26.33
	
	I0317 13:16:38.121052   10084 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:38.121602   10084 sshutil.go:53] new ssh client: &{IP:172.25.26.33 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-735200\id_rsa Username:docker}
	I0317 13:16:38.285249   10084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 13:16:38.472218   10084 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 13:16:38.475871   10084 addons.go:514] duration metric: took 11.0029913s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0317 13:16:38.475993   10084 start.go:246] waiting for cluster config update ...
	I0317 13:16:38.475993   10084 start.go:255] writing updated cluster config ...
	I0317 13:16:38.487912   10084 ssh_runner.go:195] Run: rm -f paused
	I0317 13:16:38.647555   10084 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 13:16:38.652513   10084 out.go:177] * Done! kubectl is now configured to use "cert-expiration-735200" cluster and "default" namespace by default
	I0317 13:16:37.562007    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:37.562651    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:37.570741    2276 main.go:141] libmachine: Using SSH client type: native
	I0317 13:16:37.571307    2276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.3 22 <nil> <nil>}
	I0317 13:16:37.571307    2276 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:16:37.724030    2276 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742217397.748398288
	
	I0317 13:16:37.724117    2276 fix.go:216] guest clock: 1742217397.748398288
	I0317 13:16:37.724117    2276 fix.go:229] Guest: 2025-03-17 13:16:37.748398288 +0000 UTC Remote: 2025-03-17 13:16:32.1418416 +0000 UTC m=+320.366006801 (delta=5.606556688s)
	I0317 13:16:37.724238    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:39.990292    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:39.990525    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:39.990525    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:42.747631    8032 start.go:364] duration metric: took 4m17.4256234s to acquireMachinesLock for "kubernetes-upgrade-816300"
	I0317 13:16:42.748084    8032 start.go:96] Skipping create...Using existing machine configuration
	I0317 13:16:42.748125    8032 fix.go:54] fixHost starting: 
	I0317 13:16:42.749144    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:16:44.980088    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:44.980321    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:44.980390    8032 fix.go:112] recreateIfNeeded on kubernetes-upgrade-816300: state=Running err=<nil>
	W0317 13:16:44.980390    8032 fix.go:138] unexpected machine state, will restart: <nil>
	I0317 13:16:44.984751    8032 out.go:177] * Updating the running hyperv "kubernetes-upgrade-816300" VM ...
	I0317 13:16:42.589156    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:42.589219    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:42.597230    2276 main.go:141] libmachine: Using SSH client type: native
	I0317 13:16:42.597230    2276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.3 22 <nil> <nil>}
	I0317 13:16:42.597230    2276 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1742217397
	I0317 13:16:42.747276    2276 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 17 13:16:37 UTC 2025
	
	I0317 13:16:42.747370    2276 fix.go:236] clock set: Mon Mar 17 13:16:37 UTC 2025
	 (err=<nil>)
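The clock-fix sequence above works by reading the guest's wall clock with `date +%s.%N`, subtracting the host-side timestamp to get the logged delta, and then resetting the guest with `sudo date -s @<epoch-seconds>` when the drift is large enough. A sketch of the delta arithmetic, using the values from this log:

```shell
guest=1742217397.748398288   # guest `date +%s.%N` output
host=1742217392.141841600    # host time 2025-03-17 13:16:32.1418416 UTC as epoch
# Subtract to recover the drift minikube logged as delta=5.606556688s.
awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta=%.3fs\n", g - h }'
```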
	I0317 13:16:42.747370    2276 start.go:83] releasing machines lock for "pause-471400", held for 1m5.067322s
	I0317 13:16:42.747631    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:44.990180    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:44.990180    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:44.990180    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:44.986912    8032 machine.go:93] provisionDockerMachine start ...
	I0317 13:16:44.986912    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:16:47.323165    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:47.324072    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:47.324293    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:47.705533    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:47.705533    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:47.711963    2276 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0317 13:16:47.712167    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:47.725066    2276 ssh_runner.go:195] Run: cat /version.json
	I0317 13:16:47.725066    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-471400 ).state
	I0317 13:16:50.135254    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:50.135354    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:50.135575    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:50.136208    2276 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:50.136525    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:50.136525    2276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-471400 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:50.214206    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:16:50.214669    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:50.221082    8032 main.go:141] libmachine: Using SSH client type: native
	I0317 13:16:50.221578    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.15 22 <nil> <nil>}
	I0317 13:16:50.221578    8032 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 13:16:50.360096    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-816300
	
	I0317 13:16:50.360096    8032 buildroot.go:166] provisioning hostname "kubernetes-upgrade-816300"
	I0317 13:16:50.360096    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:16:52.766650    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:52.767101    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:52.767101    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:16:52.938839    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:52.938839    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:52.939792    2276 sshutil.go:53] new ssh client: &{IP:172.25.31.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-471400\id_rsa Username:docker}
	I0317 13:16:52.969403    2276 main.go:141] libmachine: [stdout =====>] : 172.25.31.3
	
	I0317 13:16:52.969403    2276 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:52.969403    2276 sshutil.go:53] new ssh client: &{IP:172.25.31.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-471400\id_rsa Username:docker}
	I0317 13:16:53.037516    2276 ssh_runner.go:235] Completed: cat /version.json: (5.31239s)
	I0317 13:16:53.049341    2276 ssh_runner.go:195] Run: systemctl --version
	I0317 13:16:53.054990    2276 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.3428967s)
	W0317 13:16:53.054990    2276 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0317 13:16:53.076344    2276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:16:53.086196    2276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:16:53.098700    2276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 13:16:53.122530    2276 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0317 13:16:53.122530    2276 start.go:495] detecting cgroup driver to use...
	I0317 13:16:53.122530    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0317 13:16:53.169780    2276 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0317 13:16:53.169780    2276 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0317 13:16:53.172931    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 13:16:53.208885    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 13:16:53.231463    2276 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 13:16:53.242982    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 13:16:53.285654    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 13:16:53.320532    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 13:16:53.354949    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 13:16:53.398468    2276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:16:53.435765    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 13:16:53.470095    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 13:16:53.503234    2276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 13:16:53.540051    2276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:16:53.570936    2276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:16:53.603642    2276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:16:53.888025    2276 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 13:16:53.921380    2276 start.go:495] detecting cgroup driver to use...
	I0317 13:16:53.933070    2276 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0317 13:16:53.973904    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:16:54.012103    2276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:16:54.068213    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:16:54.117839    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 13:16:54.145797    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:16:54.206196    2276 ssh_runner.go:195] Run: which cri-dockerd
	I0317 13:16:54.225233    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0317 13:16:54.245716    2276 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0317 13:16:54.297550    2276 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0317 13:16:54.582515    2276 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0317 13:16:54.862931    2276 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0317 13:16:54.863202    2276 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0317 13:16:54.910896    2276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:16:55.195275    2276 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 13:16:55.403447    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:16:55.404199    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:55.412586    8032 main.go:141] libmachine: Using SSH client type: native
	I0317 13:16:55.413488    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.15 22 <nil> <nil>}
	I0317 13:16:55.413488    8032 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-816300 && echo "kubernetes-upgrade-816300" | sudo tee /etc/hostname
	I0317 13:16:55.577055    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-816300
	
	I0317 13:16:55.577055    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:16:57.826808    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:16:57.826808    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:16:57.827078    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:00.390013    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:00.390013    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:00.396745    8032 main.go:141] libmachine: Using SSH client type: native
	I0317 13:17:00.397211    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.15 22 <nil> <nil>}
	I0317 13:17:00.397332    8032 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-816300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-816300/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-816300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 13:17:00.527700    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:17:00.527700    8032 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0317 13:17:00.527772    8032 buildroot.go:174] setting up certificates
	I0317 13:17:00.527947    8032 provision.go:84] configureAuth start
	I0317 13:17:00.528018    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:02.745460    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:02.745713    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:02.745773    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:05.343749    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:05.343749    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:05.344406    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:07.588141    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:07.588141    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:07.588479    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:08.331869    2276 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.1363806s)
	I0317 13:17:08.344028    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0317 13:17:08.390206    2276 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0317 13:17:08.439766    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 13:17:08.478915    2276 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0317 13:17:08.702497    2276 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0317 13:17:08.935046    2276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:17:09.149392    2276 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0317 13:17:09.194595    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 13:17:09.234225    2276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:17:09.461143    2276 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0317 13:17:09.599960    2276 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0317 13:17:09.612335    2276 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0317 13:17:09.621164    2276 start.go:563] Will wait 60s for crictl version
	I0317 13:17:09.633829    2276 ssh_runner.go:195] Run: which crictl
	I0317 13:17:09.653651    2276 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:17:09.714488    2276 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0317 13:17:09.725250    2276 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 13:17:09.775143    2276 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 13:17:09.818722    2276 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0317 13:17:09.819255    2276 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0317 13:17:09.825335    2276 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0317 13:17:09.825335    2276 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0317 13:17:09.825335    2276 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0317 13:17:09.825335    2276 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4b:84:d5 Flags:up|broadcast|multicast|running}
	I0317 13:17:09.829358    2276 ip.go:214] interface addr: fe80::f0c7:c31c:6237:ef35/64
	I0317 13:17:09.829358    2276 ip.go:214] interface addr: 172.25.16.1/20
	I0317 13:17:09.839894    2276 ssh_runner.go:195] Run: grep 172.25.16.1	host.minikube.internal$ /etc/hosts
	I0317 13:17:09.847495    2276 kubeadm.go:883] updating cluster {Name:pause-471400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-471400 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.31.3 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-s
ecurity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:17:09.847687    2276 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 13:17:09.857476    2276 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 13:17:09.887415    2276 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0317 13:17:09.887415    2276 docker.go:619] Images already preloaded, skipping extraction
	I0317 13:17:09.897657    2276 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 13:17:09.926302    2276 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0317 13:17:09.926360    2276 cache_images.go:84] Images are preloaded, skipping loading
	I0317 13:17:09.926360    2276 kubeadm.go:934] updating node { 172.25.31.3 8443 v1.32.2 docker true true} ...
	I0317 13:17:09.926651    2276 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-471400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.31.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:pause-471400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 13:17:09.938536    2276 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0317 13:17:10.007970    2276 cni.go:84] Creating CNI manager for ""
	I0317 13:17:10.008153    2276 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 13:17:10.008222    2276 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:17:10.008222    2276 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.31.3 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-471400 NodeName:pause-471400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.31.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.31.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 13:17:10.008222    2276 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.31.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "pause-471400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.31.3"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.31.3"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 13:17:10.020853    2276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 13:17:10.042874    2276 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:17:10.055851    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:17:10.080879    2276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0317 13:17:10.119024    2276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:17:10.156661    2276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0317 13:17:10.206751    2276 ssh_runner.go:195] Run: grep 172.25.31.3	control-plane.minikube.internal$ /etc/hosts
	I0317 13:17:10.227921    2276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:17:10.471823    2276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:17:10.512237    2276 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-471400 for IP: 172.25.31.3
	I0317 13:17:10.512366    2276 certs.go:194] generating shared ca certs ...
	I0317 13:17:10.512366    2276 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:17:10.513072    2276 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0317 13:17:10.513072    2276 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0317 13:17:10.514001    2276 certs.go:256] generating profile certs ...
	I0317 13:17:10.514001    2276 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-471400\client.key
	I0317 13:17:10.515006    2276 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-471400\apiserver.key.8fb62966
	I0317 13:17:10.515006    2276 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-471400\proxy-client.key
	I0317 13:17:10.518077    2276 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem (1338 bytes)
	W0317 13:17:10.518619    2276 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940_empty.pem, impossibly tiny 0 bytes
	I0317 13:17:10.518829    2276 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0317 13:17:10.519261    2276 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0317 13:17:10.519818    2276 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0317 13:17:10.520395    2276 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0317 13:17:10.521375    2276 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem (1708 bytes)
	I0317 13:17:10.524477    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:17:10.579838    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:17:10.634157    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:17:10.687403    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0317 13:17:10.738154    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-471400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0317 13:17:10.789417    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-471400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 13:17:10.839155    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-471400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:17:10.889720    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-471400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0317 13:17:10.942549    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem --> /usr/share/ca-certificates/8940.pem (1338 bytes)
	I0317 13:17:10.995256    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /usr/share/ca-certificates/89402.pem (1708 bytes)
	I0317 13:17:11.047503    2276 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:17:11.103120    2276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:17:11.151731    2276 ssh_runner.go:195] Run: openssl version
	I0317 13:17:11.178772    2276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8940.pem && ln -fs /usr/share/ca-certificates/8940.pem /etc/ssl/certs/8940.pem"
	I0317 13:17:11.213088    2276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8940.pem
	I0317 13:17:11.221418    2276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:46 /usr/share/ca-certificates/8940.pem
	I0317 13:17:11.236292    2276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8940.pem
	I0317 13:17:11.268836    2276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8940.pem /etc/ssl/certs/51391683.0"
	I0317 13:17:11.302583    2276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/89402.pem && ln -fs /usr/share/ca-certificates/89402.pem /etc/ssl/certs/89402.pem"
	I0317 13:17:11.339345    2276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89402.pem
	I0317 13:17:11.347411    2276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:46 /usr/share/ca-certificates/89402.pem
	I0317 13:17:11.360829    2276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89402.pem
	I0317 13:17:11.383132    2276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/89402.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:17:11.415690    2276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:17:11.448945    2276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:17:11.456845    2276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:17:11.469118    2276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:17:11.490233    2276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 13:17:11.519232    2276 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:17:11.540048    2276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0317 13:17:11.559161    2276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0317 13:17:11.580548    2276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0317 13:17:11.601929    2276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0317 13:17:11.624374    2276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0317 13:17:11.644747    2276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0317 13:17:11.653844    2276 kubeadm.go:392] StartCluster: {Name:pause-471400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:pause-471400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.31.3 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:17:11.664527    2276 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0317 13:17:11.707129    2276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:17:11.725928    2276 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0317 13:17:11.725990    2276 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0317 13:17:11.737844    2276 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0317 13:17:11.757601    2276 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0317 13:17:11.759156    2276 kubeconfig.go:125] found "pause-471400" server: "https://172.25.31.3:8443"
	I0317 13:17:11.762253    2276 kapi.go:59] client config for pause-471400: &rest.Config{Host:"https://172.25.31.3:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\pause-471400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\pause-471400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2e292e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0317 13:17:11.764250    2276 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0317 13:17:11.764250    2276 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0317 13:17:11.764250    2276 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0317 13:17:11.764364    2276 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0317 13:17:11.776295    2276 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0317 13:17:11.796925    2276 kubeadm.go:630] The running cluster does not require reconfiguration: 172.25.31.3
	I0317 13:17:11.797895    2276 kubeadm.go:1160] stopping kube-system containers ...
	I0317 13:17:11.806877    2276 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0317 13:17:11.844542    2276 docker.go:483] Stopping containers: [d4f557bada23 bec7db06f9e9 0b09a5de1f0b c0ee58f77451 cecd0f7a3b60 48490bf5143c a7133843d6ed 5d96f9d335df c59c53abe2dd a77930f3d721 587f0dda7141 4a105f3090f3]
	I0317 13:17:11.859204    2276 ssh_runner.go:195] Run: docker stop d4f557bada23 bec7db06f9e9 0b09a5de1f0b c0ee58f77451 cecd0f7a3b60 48490bf5143c a7133843d6ed 5d96f9d335df c59c53abe2dd a77930f3d721 587f0dda7141 4a105f3090f3
	I0317 13:17:11.905716    2276 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0317 13:17:10.354698    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:10.354698    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:10.354698    8032 provision.go:143] copyHostCerts
	I0317 13:17:10.355720    8032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0317 13:17:10.355720    8032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0317 13:17:10.356483    8032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0317 13:17:10.358133    8032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0317 13:17:10.358188    8032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0317 13:17:10.358415    8032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0317 13:17:10.360294    8032 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0317 13:17:10.360294    8032 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0317 13:17:10.360294    8032 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0317 13:17:10.362254    8032 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-816300 san=[127.0.0.1 172.25.31.15 kubernetes-upgrade-816300 localhost minikube]
	I0317 13:17:10.492765    8032 provision.go:177] copyRemoteCerts
	I0317 13:17:10.504249    8032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 13:17:10.504249    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:12.877622    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:12.877722    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:12.877779    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:11.981288    2276 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:17:12.004626    2276 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Mar 17 13:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5655 Mar 17 13:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Mar 17 13:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5603 Mar 17 13:10 /etc/kubernetes/scheduler.conf
	
	I0317 13:17:12.016225    2276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:17:12.052486    2276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:17:12.084264    2276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:17:12.102367    2276 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0317 13:17:12.113372    2276 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:17:12.146394    2276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:17:12.166152    2276 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0317 13:17:12.179056    2276 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 13:17:12.207618    2276 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:17:12.228938    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:17:12.643075    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:17:13.971104    2276 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.3280142s)
	I0317 13:17:13.971104    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:17:14.302698    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:17:14.406407    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:17:14.508662    2276 api_server.go:52] waiting for apiserver process to appear ...
	I0317 13:17:14.519080    2276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:17:15.025301    2276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:17:15.523753    2276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:17:16.019967    2276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:17:16.520248    2276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:17:16.548439    2276 api_server.go:72] duration metric: took 2.0397543s to wait for apiserver process to appear ...
	I0317 13:17:16.548565    2276 api_server.go:88] waiting for apiserver healthz status ...
	I0317 13:17:16.548633    2276 api_server.go:253] Checking apiserver healthz at https://172.25.31.3:8443/healthz ...
	I0317 13:17:15.528908    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:15.529579    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:15.530037    8032 sshutil.go:53] new ssh client: &{IP:172.25.31.15 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-816300\id_rsa Username:docker}
	I0317 13:17:15.641896    8032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1375892s)
	I0317 13:17:15.642426    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0317 13:17:15.695164    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 13:17:15.747508    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 13:17:15.802865    8032 provision.go:87] duration metric: took 15.2747475s to configureAuth
	I0317 13:17:15.802865    8032 buildroot.go:189] setting minikube options for container-runtime
	I0317 13:17:15.803856    8032 config.go:182] Loaded profile config "kubernetes-upgrade-816300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:17:15.803856    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:18.099138    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:18.099138    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:18.099815    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:19.729867    2276 api_server.go:279] https://172.25.31.3:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0317 13:17:19.729952    2276 api_server.go:103] status: https://172.25.31.3:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0317 13:17:19.729952    2276 api_server.go:253] Checking apiserver healthz at https://172.25.31.3:8443/healthz ...
	I0317 13:17:19.882354    2276 api_server.go:279] https://172.25.31.3:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0317 13:17:19.882452    2276 api_server.go:103] status: https://172.25.31.3:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0317 13:17:20.048918    2276 api_server.go:253] Checking apiserver healthz at https://172.25.31.3:8443/healthz ...
	I0317 13:17:20.057566    2276 api_server.go:279] https://172.25.31.3:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0317 13:17:20.057825    2276 api_server.go:103] status: https://172.25.31.3:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0317 13:17:20.548877    2276 api_server.go:253] Checking apiserver healthz at https://172.25.31.3:8443/healthz ...
	I0317 13:17:20.564020    2276 api_server.go:279] https://172.25.31.3:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0317 13:17:20.564020    2276 api_server.go:103] status: https://172.25.31.3:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0317 13:17:21.050015    2276 api_server.go:253] Checking apiserver healthz at https://172.25.31.3:8443/healthz ...
	I0317 13:17:21.058908    2276 api_server.go:279] https://172.25.31.3:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0317 13:17:21.059876    2276 api_server.go:103] status: https://172.25.31.3:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0317 13:17:21.549308    2276 api_server.go:253] Checking apiserver healthz at https://172.25.31.3:8443/healthz ...
	I0317 13:17:21.559326    2276 api_server.go:279] https://172.25.31.3:8443/healthz returned 200:
	ok
	I0317 13:17:21.571712    2276 api_server.go:141] control plane version: v1.32.2
	I0317 13:17:21.571712    2276 api_server.go:131] duration metric: took 5.0230907s to wait for apiserver health ...
	I0317 13:17:21.571712    2276 cni.go:84] Creating CNI manager for ""
	I0317 13:17:21.571712    2276 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 13:17:21.575214    2276 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0317 13:17:21.590464    2276 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0317 13:17:21.617169    2276 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
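The step above copies a 496-byte bridge CNI conflist into `/etc/cni/net.d/1-k8s.conflist`; the log does not echo the payload itself, so the exact contents stay unknown. As context only, a typical bridge conflist of the kind such a step installs has this shape (field values here, including the `10.244.0.0/16` subnet, are illustrative, not read from this run):

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```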
	I0317 13:17:21.653927    2276 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 13:17:21.660057    2276 system_pods.go:59] 6 kube-system pods found
	I0317 13:17:21.660057    2276 system_pods.go:61] "coredns-668d6bf9bc-2xpj4" [704a1878-5d2f-4871-98ac-ced7ddfbc684] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 13:17:21.660057    2276 system_pods.go:61] "etcd-pause-471400" [98e4a9fc-1ef0-4a40-a394-634314ddd363] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0317 13:17:21.660057    2276 system_pods.go:61] "kube-apiserver-pause-471400" [032360c0-bb5e-497f-ac77-134a17fab99f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0317 13:17:21.660057    2276 system_pods.go:61] "kube-controller-manager-pause-471400" [b982ef81-1c85-4fd8-838b-2b8bbf1993d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0317 13:17:21.660057    2276 system_pods.go:61] "kube-proxy-2w5n2" [d2be3017-491d-427e-982e-7fcdf387b94a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0317 13:17:21.660057    2276 system_pods.go:61] "kube-scheduler-pause-471400" [0a95a12f-a384-429a-93e0-8c27dbbe9c3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0317 13:17:21.660057    2276 system_pods.go:74] duration metric: took 6.0764ms to wait for pod list to return data ...
	I0317 13:17:21.660057    2276 node_conditions.go:102] verifying NodePressure condition ...
	I0317 13:17:21.670471    2276 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 13:17:21.670533    2276 node_conditions.go:123] node cpu capacity is 2
	I0317 13:17:21.670533    2276 node_conditions.go:105] duration metric: took 10.476ms to run NodePressure ...
	I0317 13:17:21.670591    2276 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:17:20.737824    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:20.737824    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:20.745017    8032 main.go:141] libmachine: Using SSH client type: native
	I0317 13:17:20.745633    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.15 22 <nil> <nil>}
	I0317 13:17:20.745633    8032 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0317 13:17:20.879466    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0317 13:17:20.879529    8032 buildroot.go:70] root file system type: tmpfs
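The root-filesystem probe above (`df --output=fstype / | tail -n 1` over SSH) can be reproduced locally; a minimal sketch, assuming GNU coreutils `df` (the `tmpfs` answer is specific to the RAM-backed buildroot ISO, an ordinary host will report ext4, xfs, overlay, etc.):

```shell
#!/bin/sh
# Query the filesystem type of / the same way the log's SSH command does.
fstype=$(df --output=fstype / | tail -n 1)
echo "root filesystem type: ${fstype}"

# A non-empty answer is all the provisioner needs before writing the unit file.
[ -n "${fstype}" ] && echo "probe ok"
```
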
	I0317 13:17:20.879529    8032 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0317 13:17:20.879529    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:23.158531    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:23.158572    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:23.158572    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:22.434282    2276 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0317 13:17:22.441573    2276 kubeadm.go:739] kubelet initialised
	I0317 13:17:22.441720    2276 kubeadm.go:740] duration metric: took 6.4239ms waiting for restarted kubelet to initialise ...
	I0317 13:17:22.441778    2276 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:17:22.445549    2276 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:24.455661    2276 pod_ready.go:103] pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace has status "Ready":"False"
	I0317 13:17:25.782731    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:25.782731    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:25.788704    8032 main.go:141] libmachine: Using SSH client type: native
	I0317 13:17:25.789625    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.15 22 <nil> <nil>}
	I0317 13:17:25.789782    8032 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0317 13:17:25.945738    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
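The unit file echoed above relies on the ExecStart-clearing idiom its own comments describe: an empty `ExecStart=` first discards any inherited command, then the real command is set, because systemd allows multiple `ExecStart=` lines only for `Type=oneshot` services. A self-contained sketch of checking that shape (no systemd needed):

```shell
#!/bin/sh
# Write a minimal unit fragment using the same override pattern.
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Service]
Type=notify
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF

# Both lines must exist, and the clearing line must come first.
grep -c '^ExecStart=' "$unit"                # prints 2
sed -n '/^ExecStart=/p' "$unit" | head -n 1  # prints the bare "ExecStart="
rm -f "$unit"
```
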
	I0317 13:17:25.945738    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:28.192555    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:28.193208    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:28.193293    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:26.956163    2276 pod_ready.go:103] pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace has status "Ready":"False"
	I0317 13:17:29.454463    2276 pod_ready.go:103] pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace has status "Ready":"False"
	I0317 13:17:29.955675    2276 pod_ready.go:93] pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:29.955738    2276 pod_ready.go:82] duration metric: took 7.5100541s for pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:29.955738    2276 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:29.962561    2276 pod_ready.go:93] pod "etcd-pause-471400" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:29.962561    2276 pod_ready.go:82] duration metric: took 6.8227ms for pod "etcd-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:29.962561    2276 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:30.801332    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:30.801598    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:30.807053    8032 main.go:141] libmachine: Using SSH client type: native
	I0317 13:17:30.807528    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.15 22 <nil> <nil>}
	I0317 13:17:30.807528    8032 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0317 13:17:30.945771    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 13:17:30.945771    8032 machine.go:96] duration metric: took 45.9583445s to provisionDockerMachine
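The provisioning command just above uses the idempotent `diff -u old new || { replace; restart; }` idiom: when the freshly generated unit matches the installed one, `diff` exits 0 and the service is left untouched; only on a difference is the file swapped in and docker restarted. A sudo- and systemctl-free sketch of the same control flow (file names and contents are stand-ins):

```shell
#!/bin/sh
# Two temp files play the roles of the installed and newly generated unit.
old=$(mktemp); new=$(mktemp)
echo "setting=a" > "$old"
echo "setting=b" > "$new"

diff -u "$old" "$new" > /dev/null || {
    mv "$new" "$old"          # stand-in for installing docker.service.new
    echo "service restarted"  # stand-in for systemctl restart docker
}

cat "$old"    # prints "setting=b": the new config took effect
rm -f "$old" "$new"
```

Run the block a second time against identical files and the `||` branch never fires, which is what makes the update safe to repeat on every provision.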
	I0317 13:17:30.945771    8032 start.go:293] postStartSetup for "kubernetes-upgrade-816300" (driver="hyperv")
	I0317 13:17:30.945771    8032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 13:17:30.957647    8032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 13:17:30.957647    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:33.148435    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:33.148613    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:33.148613    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:31.972597    2276 pod_ready.go:103] pod "kube-apiserver-pause-471400" in "kube-system" namespace has status "Ready":"False"
	I0317 13:17:33.471689    2276 pod_ready.go:93] pod "kube-apiserver-pause-471400" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:33.471689    2276 pod_ready.go:82] duration metric: took 3.5090886s for pod "kube-apiserver-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:33.471689    2276 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:34.481936    2276 pod_ready.go:93] pod "kube-controller-manager-pause-471400" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:34.481936    2276 pod_ready.go:82] duration metric: took 1.0102358s for pod "kube-controller-manager-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:34.481936    2276 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2w5n2" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:34.489466    2276 pod_ready.go:93] pod "kube-proxy-2w5n2" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:34.489466    2276 pod_ready.go:82] duration metric: took 7.5302ms for pod "kube-proxy-2w5n2" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:34.489466    2276 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:34.496080    2276 pod_ready.go:93] pod "kube-scheduler-pause-471400" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:34.496080    2276 pod_ready.go:82] duration metric: took 6.6138ms for pod "kube-scheduler-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:34.496080    2276 pod_ready.go:39] duration metric: took 12.0541671s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:17:34.496614    2276 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 13:17:34.517447    2276 ops.go:34] apiserver oom_adj: -16
	I0317 13:17:34.517518    2276 kubeadm.go:597] duration metric: took 22.791273s to restartPrimaryControlPlane
	I0317 13:17:34.517564    2276 kubeadm.go:394] duration metric: took 22.8634642s to StartCluster
	I0317 13:17:34.517564    2276 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:17:34.517708    2276 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 13:17:34.519537    2276 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:17:34.521209    2276 start.go:235] Will wait 6m0s for node &{Name: IP:172.25.31.3 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 13:17:34.521209    2276 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 13:17:34.521209    2276 config.go:182] Loaded profile config "pause-471400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:17:34.526762    2276 out.go:177] * Verifying Kubernetes components...
	I0317 13:17:34.529267    2276 out.go:177] * Enabled addons: 
	I0317 13:17:34.535267    2276 addons.go:514] duration metric: took 14.0579ms for enable addons: enabled=[]
	I0317 13:17:34.543716    2276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:17:34.855703    2276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:17:34.884172    2276 node_ready.go:35] waiting up to 6m0s for node "pause-471400" to be "Ready" ...
	I0317 13:17:34.888852    2276 node_ready.go:49] node "pause-471400" has status "Ready":"True"
	I0317 13:17:34.888852    2276 node_ready.go:38] duration metric: took 4.68ms for node "pause-471400" to be "Ready" ...
	I0317 13:17:34.888852    2276 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:17:34.894067    2276 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:34.901365    2276 pod_ready.go:93] pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:34.901365    2276 pod_ready.go:82] duration metric: took 7.2982ms for pod "coredns-668d6bf9bc-2xpj4" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:34.901365    2276 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:35.153071    2276 pod_ready.go:93] pod "etcd-pause-471400" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:35.153071    2276 pod_ready.go:82] duration metric: took 251.7026ms for pod "etcd-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:35.153071    2276 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:35.553894    2276 pod_ready.go:93] pod "kube-apiserver-pause-471400" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:35.553894    2276 pod_ready.go:82] duration metric: took 400.8191ms for pod "kube-apiserver-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:35.553894    2276 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:35.952366    2276 pod_ready.go:93] pod "kube-controller-manager-pause-471400" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:35.952366    2276 pod_ready.go:82] duration metric: took 398.4672ms for pod "kube-controller-manager-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:35.952447    2276 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2w5n2" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:36.357494    2276 pod_ready.go:93] pod "kube-proxy-2w5n2" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:36.357582    2276 pod_ready.go:82] duration metric: took 405.1305ms for pod "kube-proxy-2w5n2" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:36.357582    2276 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:36.752198    2276 pod_ready.go:93] pod "kube-scheduler-pause-471400" in "kube-system" namespace has status "Ready":"True"
	I0317 13:17:36.752327    2276 pod_ready.go:82] duration metric: took 394.7403ms for pod "kube-scheduler-pause-471400" in "kube-system" namespace to be "Ready" ...
	I0317 13:17:36.752327    2276 pod_ready.go:39] duration metric: took 1.8634543s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 13:17:36.752327    2276 api_server.go:52] waiting for apiserver process to appear ...
	I0317 13:17:36.764599    2276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:17:36.800961    2276 api_server.go:72] duration metric: took 2.2797264s to wait for apiserver process to appear ...
	I0317 13:17:36.801100    2276 api_server.go:88] waiting for apiserver healthz status ...
	I0317 13:17:36.801100    2276 api_server.go:253] Checking apiserver healthz at https://172.25.31.3:8443/healthz ...
	I0317 13:17:36.808627    2276 api_server.go:279] https://172.25.31.3:8443/healthz returned 200:
	ok
	I0317 13:17:36.811190    2276 api_server.go:141] control plane version: v1.32.2
	I0317 13:17:36.811234    2276 api_server.go:131] duration metric: took 10.1339ms to wait for apiserver health ...
	I0317 13:17:36.811234    2276 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 13:17:36.952088    2276 system_pods.go:59] 6 kube-system pods found
	I0317 13:17:36.952088    2276 system_pods.go:61] "coredns-668d6bf9bc-2xpj4" [704a1878-5d2f-4871-98ac-ced7ddfbc684] Running
	I0317 13:17:36.952088    2276 system_pods.go:61] "etcd-pause-471400" [98e4a9fc-1ef0-4a40-a394-634314ddd363] Running
	I0317 13:17:36.952088    2276 system_pods.go:61] "kube-apiserver-pause-471400" [032360c0-bb5e-497f-ac77-134a17fab99f] Running
	I0317 13:17:36.952088    2276 system_pods.go:61] "kube-controller-manager-pause-471400" [b982ef81-1c85-4fd8-838b-2b8bbf1993d5] Running
	I0317 13:17:36.952088    2276 system_pods.go:61] "kube-proxy-2w5n2" [d2be3017-491d-427e-982e-7fcdf387b94a] Running
	I0317 13:17:36.952088    2276 system_pods.go:61] "kube-scheduler-pause-471400" [0a95a12f-a384-429a-93e0-8c27dbbe9c3f] Running
	I0317 13:17:36.952088    2276 system_pods.go:74] duration metric: took 140.8525ms to wait for pod list to return data ...
	I0317 13:17:36.952088    2276 default_sa.go:34] waiting for default service account to be created ...
	I0317 13:17:37.153579    2276 default_sa.go:45] found service account: "default"
	I0317 13:17:37.153743    2276 default_sa.go:55] duration metric: took 201.6532ms for default service account to be created ...
	I0317 13:17:37.153743    2276 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 13:17:37.354555    2276 system_pods.go:86] 6 kube-system pods found
	I0317 13:17:37.354555    2276 system_pods.go:89] "coredns-668d6bf9bc-2xpj4" [704a1878-5d2f-4871-98ac-ced7ddfbc684] Running
	I0317 13:17:37.354555    2276 system_pods.go:89] "etcd-pause-471400" [98e4a9fc-1ef0-4a40-a394-634314ddd363] Running
	I0317 13:17:37.354555    2276 system_pods.go:89] "kube-apiserver-pause-471400" [032360c0-bb5e-497f-ac77-134a17fab99f] Running
	I0317 13:17:37.354555    2276 system_pods.go:89] "kube-controller-manager-pause-471400" [b982ef81-1c85-4fd8-838b-2b8bbf1993d5] Running
	I0317 13:17:37.354555    2276 system_pods.go:89] "kube-proxy-2w5n2" [d2be3017-491d-427e-982e-7fcdf387b94a] Running
	I0317 13:17:37.354555    2276 system_pods.go:89] "kube-scheduler-pause-471400" [0a95a12f-a384-429a-93e0-8c27dbbe9c3f] Running
	I0317 13:17:37.354555    2276 system_pods.go:126] duration metric: took 200.8097ms to wait for k8s-apps to be running ...
	I0317 13:17:37.354555    2276 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 13:17:37.369871    2276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:17:37.397546    2276 system_svc.go:56] duration metric: took 42.9902ms WaitForService to wait for kubelet
	I0317 13:17:37.397692    2276 kubeadm.go:582] duration metric: took 2.8764504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 13:17:37.397692    2276 node_conditions.go:102] verifying NodePressure condition ...
	I0317 13:17:37.551807    2276 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 13:17:37.551807    2276 node_conditions.go:123] node cpu capacity is 2
	I0317 13:17:37.551807    2276 node_conditions.go:105] duration metric: took 154.1133ms to run NodePressure ...
	I0317 13:17:37.551807    2276 start.go:241] waiting for startup goroutines ...
	I0317 13:17:37.551807    2276 start.go:246] waiting for cluster config update ...
	I0317 13:17:37.551807    2276 start.go:255] writing updated cluster config ...
	I0317 13:17:37.567501    2276 ssh_runner.go:195] Run: rm -f paused
	I0317 13:17:37.733467    2276 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 13:17:37.745874    2276 out.go:177] * Done! kubectl is now configured to use "pause-471400" cluster and "default" namespace by default
	I0317 13:17:35.801350    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:35.801350    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:35.802069    8032 sshutil.go:53] new ssh client: &{IP:172.25.31.15 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-816300\id_rsa Username:docker}
	I0317 13:17:35.910924    8032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9531678s)
	I0317 13:17:35.922163    8032 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 13:17:35.928978    8032 info.go:137] Remote host: Buildroot 2023.02.9
	I0317 13:17:35.928978    8032 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0317 13:17:35.929473    8032 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0317 13:17:35.930571    8032 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem -> 89402.pem in /etc/ssl/certs
	I0317 13:17:35.942242    8032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0317 13:17:35.970491    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /etc/ssl/certs/89402.pem (1708 bytes)
	I0317 13:17:36.018429    8032 start.go:296] duration metric: took 5.0726005s for postStartSetup
	I0317 13:17:36.018553    8032 fix.go:56] duration metric: took 53.2698313s for fixHost
	I0317 13:17:36.018614    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:38.392182    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:38.392182    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:38.393076    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:41.217896    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:41.217896    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:41.224130    8032 main.go:141] libmachine: Using SSH client type: native
	I0317 13:17:41.224855    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.15 22 <nil> <nil>}
	I0317 13:17:41.224923    8032 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0317 13:17:41.362553    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742217461.386042620
	
	I0317 13:17:41.362553    8032 fix.go:216] guest clock: 1742217461.386042620
	I0317 13:17:41.362553    8032 fix.go:229] Guest: 2025-03-17 13:17:41.38604262 +0000 UTC Remote: 2025-03-17 13:17:36.0185533 +0000 UTC m=+316.796178401 (delta=5.36748932s)
	I0317 13:17:41.362553    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:43.686366    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:43.686366    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:43.686716    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:46.565761    7220 start.go:364] duration metric: took 2m26.9697056s to acquireMachinesLock for "docker-flags-664100"
	I0317 13:17:46.566068    7220 start.go:93] Provisioning new machine with config: &{Name:docker-flags-664100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.32.2 ClusterName:docker-flags-664100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 13:17:46.566290    7220 start.go:125] createHost starting for "" (driver="hyperv")
	I0317 13:17:46.570469    7220 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0317 13:17:46.571031    7220 start.go:159] libmachine.API.Create for "docker-flags-664100" (driver="hyperv")
	I0317 13:17:46.571136    7220 client.go:168] LocalClient.Create starting
	I0317 13:17:46.572389    7220 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0317 13:17:46.572450    7220 main.go:141] libmachine: Decoding PEM data...
	I0317 13:17:46.572450    7220 main.go:141] libmachine: Parsing certificate...
	I0317 13:17:46.572450    7220 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0317 13:17:46.573116    7220 main.go:141] libmachine: Decoding PEM data...
	I0317 13:17:46.573116    7220 main.go:141] libmachine: Parsing certificate...
	I0317 13:17:46.573116    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0317 13:17:48.746053    7220 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0317 13:17:48.746511    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:48.746599    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0317 13:17:46.407812    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:46.407812    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:46.413158    8032 main.go:141] libmachine: Using SSH client type: native
	I0317 13:17:46.413798    8032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11f7d00] 0x11fa840 <nil>  [] 0s} 172.25.31.15 22 <nil> <nil>}
	I0317 13:17:46.413798    8032 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1742217461
	I0317 13:17:46.565085    8032 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Mar 17 13:17:41 UTC 2025
	
	I0317 13:17:46.565182    8032 fix.go:236] clock set: Mon Mar 17 13:17:41 UTC 2025
	 (err=<nil>)
	I0317 13:17:46.565182    8032 start.go:83] releasing machines lock for "kubernetes-upgrade-816300", held for 1m3.8166881s
	I0317 13:17:46.565539    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:49.002566    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:49.002566    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:49.002648    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:50.720223    7220 main.go:141] libmachine: [stdout =====>] : False
	
	I0317 13:17:50.720626    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:50.720734    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 13:17:52.423871    7220 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 13:17:52.424086    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:52.424156    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 13:17:51.880887    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:51.880887    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:51.885854    8032 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0317 13:17:51.885854    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:51.898859    8032 ssh_runner.go:195] Run: cat /version.json
	I0317 13:17:51.899868    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:17:54.421191    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:54.421583    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:54.421728    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:56.909392    7220 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 13:17:56.909529    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:56.913267    7220 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0317 13:17:57.442322    7220 main.go:141] libmachine: Creating SSH key...
	I0317 13:17:57.710334    7220 main.go:141] libmachine: Creating VM...
	I0317 13:17:57.710334    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0317 13:17:54.438861    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:17:54.438861    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:54.438861    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:17:57.332710    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:57.332710    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:57.333836    8032 sshutil.go:53] new ssh client: &{IP:172.25.31.15 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-816300\id_rsa Username:docker}
	I0317 13:17:57.369512    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:17:57.369623    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:17:57.369715    8032 sshutil.go:53] new ssh client: &{IP:172.25.31.15 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-816300\id_rsa Username:docker}
	I0317 13:17:57.447051    8032 ssh_runner.go:235] Completed: cat /version.json: (5.5481296s)
	I0317 13:17:57.460709    8032 ssh_runner.go:195] Run: systemctl --version
	I0317 13:17:57.466105    8032 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.5801146s)
	W0317 13:17:57.466172    8032 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0317 13:17:57.490471    8032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0317 13:17:57.500781    8032 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0317 13:17:57.513044    8032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0317 13:17:57.546333    8032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	W0317 13:17:57.579874    8032 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0317 13:17:57.579874    8032 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0317 13:17:57.585318    8032 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0317 13:17:57.585318    8032 start.go:495] detecting cgroup driver to use...
	I0317 13:17:57.585318    8032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:17:57.640370    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 13:17:57.675278    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 13:17:57.695285    8032 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 13:17:57.705286    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 13:17:57.742030    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 13:17:57.777181    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 13:17:57.820814    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 13:17:57.857280    8032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 13:17:57.901296    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 13:17:57.942785    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 13:17:57.982121    8032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 13:17:58.017874    8032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 13:17:58.054040    8032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 13:17:58.095195    8032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:17:58.413025    8032 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 13:17:58.451641    8032 start.go:495] detecting cgroup driver to use...
	I0317 13:17:58.465652    8032 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0317 13:17:58.529076    8032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:17:58.570640    8032 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0317 13:17:58.626538    8032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0317 13:17:58.662958    8032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 13:17:58.690221    8032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 13:17:58.738088    8032 ssh_runner.go:195] Run: which cri-dockerd
	I0317 13:17:58.758481    8032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0317 13:17:58.779603    8032 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0317 13:17:58.833624    8032 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0317 13:17:59.146106    8032 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0317 13:18:01.166377    7220 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0317 13:18:01.166435    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:01.166708    7220 main.go:141] libmachine: Using switch "Default Switch"
	I0317 13:18:01.166827    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0317 13:18:02.999476    7220 main.go:141] libmachine: [stdout =====>] : True
	
	I0317 13:18:02.999476    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:03.000256    7220 main.go:141] libmachine: Creating VHD
	I0317 13:18:03.000256    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-664100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0317 13:17:59.431162    8032 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0317 13:17:59.431455    8032 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0317 13:17:59.480093    8032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:17:59.800249    8032 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0317 13:18:07.000287    7220 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-664100\fixed.
	                          vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 97CF4DDA-928C-42E4-BCB3-D3451FC3FCD8
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0317 13:18:07.001272    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:07.001272    7220 main.go:141] libmachine: Writing magic tar header
	I0317 13:18:07.001415    7220 main.go:141] libmachine: Writing SSH key tar header
	I0317 13:18:07.014756    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-664100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-664100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0317 13:18:10.351982    7220 main.go:141] libmachine: [stdout =====>] : 
	I0317 13:18:10.352821    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:10.352915    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-664100\disk.vhd' -SizeBytes 20000MB
	I0317 13:18:13.066971    7220 main.go:141] libmachine: [stdout =====>] : 
	I0317 13:18:13.067124    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:13.067206    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM docker-flags-664100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-664100' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0317 13:18:12.877539    8032 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.0770378s)
	I0317 13:18:12.896017    8032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0317 13:18:12.955589    8032 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0317 13:18:13.012159    8032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 13:18:13.072903    8032 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0317 13:18:13.340896    8032 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0317 13:18:13.573608    8032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:18:13.829786    8032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0317 13:18:13.875203    8032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0317 13:18:13.919615    8032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:18:14.187751    8032 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0317 13:18:14.329293    8032 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0317 13:18:14.342413    8032 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0317 13:18:14.354711    8032 start.go:563] Will wait 60s for crictl version
	I0317 13:18:14.366874    8032 ssh_runner.go:195] Run: which crictl
	I0317 13:18:14.383360    8032 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 13:18:14.442622    8032 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0317 13:18:14.452246    8032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 13:18:14.501000    8032 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0317 13:18:17.267362    7220 main.go:141] libmachine: [stdout =====>] : 
	Name                State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                ----- ----------- ----------------- ------   ------             -------
	docker-flags-664100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0317 13:18:17.267362    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:17.267362    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName docker-flags-664100 -DynamicMemoryEnabled $false
	I0317 13:18:14.556142    8032 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0317 13:18:14.557103    8032 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0317 13:18:14.561113    8032 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0317 13:18:14.561113    8032 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0317 13:18:14.561113    8032 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0317 13:18:14.561113    8032 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:4b:84:d5 Flags:up|broadcast|multicast|running}
	I0317 13:18:14.565200    8032 ip.go:214] interface addr: fe80::f0c7:c31c:6237:ef35/64
	I0317 13:18:14.565340    8032 ip.go:214] interface addr: 172.25.16.1/20
	I0317 13:18:14.577182    8032 ssh_runner.go:195] Run: grep 172.25.16.1	host.minikube.internal$ /etc/hosts
	I0317 13:18:14.585243    8032 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-816300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ku
bernetes-upgrade-816300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.31.15 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 13:18:14.585517    8032 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 13:18:14.594567    8032 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 13:18:14.627206    8032 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0317 13:18:14.627276    8032 docker.go:619] Images already preloaded, skipping extraction
	I0317 13:18:14.636447    8032 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0317 13:18:14.682993    8032 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0317 13:18:14.683057    8032 cache_images.go:84] Images are preloaded, skipping loading
	I0317 13:18:14.683115    8032 kubeadm.go:934] updating node { 172.25.31.15 8443 v1.32.2 docker true true} ...
	I0317 13:18:14.683480    8032 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-816300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.25.31.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-816300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 13:18:14.695220    8032 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0317 13:18:14.771989    8032 cni.go:84] Creating CNI manager for ""
	I0317 13:18:14.772063    8032 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 13:18:14.772150    8032 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 13:18:14.772150    8032 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.25.31.15 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-816300 NodeName:kubernetes-upgrade-816300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.25.31.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.25.31.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 13:18:14.772565    8032 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.25.31.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-816300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.25.31.15"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.25.31.15"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
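	The kubeadm config printed above is a single multi-document YAML stream: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration, and a KubeProxyConfiguration separated by `---`. A minimal sketch of splitting such a stream and reading each document's `kind`, using only the Python stdlib (a real tool would use a proper YAML parser; the sample text and function name here are illustrative, not minikube's code):

```python
# Hypothetical sketch: split a kubeadm-style multi-document YAML stream on
# "---" separators and extract each document's "kind" value, stdlib only.

SAMPLE = """\
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def kinds(yaml_text: str) -> list[str]:
    """Return the kind: value of each document in a multi-doc YAML string."""
    result = []
    for doc in yaml_text.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                result.append(line.split(":", 1)[1].strip())
                break
    return result

print(kinds(SAMPLE))
```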
	I0317 13:18:14.784343    8032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 13:18:14.803518    8032 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 13:18:14.816380    8032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 13:18:14.836728    8032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0317 13:18:14.869634    8032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 13:18:14.906219    8032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2303 bytes)
	I0317 13:18:14.954427    8032 ssh_runner.go:195] Run: grep 172.25.31.15	control-plane.minikube.internal$ /etc/hosts
	I0317 13:18:14.977053    8032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:18:15.257624    8032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:18:15.303878    8032 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-816300 for IP: 172.25.31.15
	I0317 13:18:15.303985    8032 certs.go:194] generating shared ca certs ...
	I0317 13:18:15.303985    8032 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:18:15.304844    8032 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0317 13:18:15.305276    8032 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0317 13:18:15.305532    8032 certs.go:256] generating profile certs ...
	I0317 13:18:15.305974    8032 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-816300\client.key
	I0317 13:18:15.307469    8032 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-816300\apiserver.key.e431048a
	I0317 13:18:15.308801    8032 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-816300\proxy-client.key
	I0317 13:18:15.311819    8032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem (1338 bytes)
	W0317 13:18:15.311819    8032 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940_empty.pem, impossibly tiny 0 bytes
	I0317 13:18:15.311819    8032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0317 13:18:15.312882    8032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0317 13:18:15.313217    8032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0317 13:18:15.313217    8032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0317 13:18:15.314072    8032 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem (1708 bytes)
	I0317 13:18:15.316827    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 13:18:15.400199    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 13:18:15.462425    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 13:18:15.526167    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0317 13:18:15.597174    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-816300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0317 13:18:15.661787    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-816300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 13:18:15.735925    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-816300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 13:18:15.798431    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kubernetes-upgrade-816300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0317 13:18:15.855651    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\89402.pem --> /usr/share/ca-certificates/89402.pem (1708 bytes)
	I0317 13:18:15.924382    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 13:18:15.994691    8032 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8940.pem --> /usr/share/ca-certificates/8940.pem (1338 bytes)
	I0317 13:18:16.114460    8032 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 13:18:16.210251    8032 ssh_runner.go:195] Run: openssl version
	I0317 13:18:16.233257    8032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/89402.pem && ln -fs /usr/share/ca-certificates/89402.pem /etc/ssl/certs/89402.pem"
	I0317 13:18:16.328252    8032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89402.pem
	I0317 13:18:16.348691    8032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 10:46 /usr/share/ca-certificates/89402.pem
	I0317 13:18:16.361948    8032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89402.pem
	I0317 13:18:16.382921    8032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/89402.pem /etc/ssl/certs/3ec20f2e.0"
	I0317 13:18:16.425587    8032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 13:18:16.496799    8032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:18:16.509613    8032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:18:16.530149    8032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 13:18:16.553931    8032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 13:18:16.597941    8032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8940.pem && ln -fs /usr/share/ca-certificates/8940.pem /etc/ssl/certs/8940.pem"
	I0317 13:18:16.647365    8032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8940.pem
	I0317 13:18:16.655751    8032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 10:46 /usr/share/ca-certificates/8940.pem
	I0317 13:18:16.671372    8032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8940.pem
	I0317 13:18:16.699097    8032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8940.pem /etc/ssl/certs/51391683.0"
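	The `sudo /bin/bash -c "test -L ... || ln -fs ..."` commands above install OpenSSL-style hash symlinks (e.g. `b5213941.0` pointing at `minikubeCA.pem`) into `/etc/ssl/certs`, and are deliberately idempotent: the link is only (re)created if it does not already exist as a symlink. A sketch of that pattern, using a temp directory instead of `/etc/ssl/certs` and an illustrative, not computed, hash value:

```python
# Hypothetical sketch of the idempotent hash-symlink step minikube runs as
# `test -L /etc/ssl/certs/<hash>.0 || ln -fs <cert> /etc/ssl/certs/<hash>.0`.
import os
import tempfile

def link_cert(certs_dir: str, cert_name: str, subject_hash: str) -> str:
    """Create <subject_hash>.0 -> <cert_name> in certs_dir if missing."""
    link = os.path.join(certs_dir, subject_hash + ".0")
    if not os.path.islink(link):           # test -L
        if os.path.exists(link):
            os.remove(link)                # ln -f semantics
        os.symlink(cert_name, link)        # ln -s (relative target)
    return link

with tempfile.TemporaryDirectory() as d:
    # fake certificate file; "b5213941" is illustrative, not a real hash
    open(os.path.join(d, "minikubeCA.pem"), "w").close()
    link = link_cert(d, "minikubeCA.pem", "b5213941")
    print(os.readlink(link))
```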
	I0317 13:18:16.736011    8032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 13:18:16.757205    8032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0317 13:18:16.783467    8032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0317 13:18:16.808086    8032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0317 13:18:16.833166    8032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0317 13:18:16.870359    8032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0317 13:18:16.922298    8032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
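	Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate is already expired or will expire within the next 86400 seconds (24 hours), which is how minikube decides whether control-plane certs need regeneration. The same decision sketched as datetime arithmetic (the notAfter values here are illustrative, not taken from real certificates):

```python
# Sketch of what `openssl x509 -checkend 86400` decides: does the
# certificate's notAfter fall within the next 86400 seconds?
from datetime import datetime, timedelta, timezone

def expires_within(not_after: datetime, seconds: int, now: datetime) -> bool:
    """True if the cert is already expired or expires within `seconds`."""
    return not_after <= now + timedelta(seconds=seconds)

now = datetime(2025, 3, 17, 13, 18, 16, tzinfo=timezone.utc)
ok_cert = now + timedelta(days=365)      # plenty of validity left
near_expiry = now + timedelta(hours=12)  # inside the 24h window

print(expires_within(ok_cert, 86400, now))      # False: checkend passes
print(expires_within(near_expiry, 86400, now))  # True: checkend fails
```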
	I0317 13:18:16.938398    8032 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-816300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-816300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.25.31.15 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:18:16.950342    8032 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0317 13:18:17.004467    8032 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 13:18:17.077298    8032 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0317 13:18:17.077360    8032 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0317 13:18:17.090135    8032 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0317 13:18:17.140276    8032 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0317 13:18:17.142058    8032 kubeconfig.go:125] found "kubernetes-upgrade-816300" server: "https://172.25.31.15:8443"
	I0317 13:18:17.144483    8032 kapi.go:59] client config for kubernetes-upgrade-816300: &rest.Config{Host:"https://172.25.31.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-816300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-816300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2e292e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0317 13:18:17.147448    8032 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0317 13:18:17.147448    8032 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0317 13:18:17.147448    8032 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0317 13:18:17.147539    8032 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0317 13:18:17.159042    8032 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0317 13:18:17.190122    8032 kubeadm.go:630] The running cluster does not require reconfiguration: 172.25.31.15
	I0317 13:18:17.190183    8032 kubeadm.go:1160] stopping kube-system containers ...
	I0317 13:18:17.201190    8032 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0317 13:18:17.351159    8032 docker.go:483] Stopping containers: [db863eea0649 eb0e141e3644 c2a5fddd2f2a bd5d9054ca0b 3508290bdfea 01c8f83a4fd4 6af969732bf3 3dbb8ea1a155 1fb48ef6e80d 713e8fb828e6 9ddab5c27dbb 68768033f15d d7acb494b3ff 85d7e55ce335 702e9352f569 5327df366234 eb495adde413 351f7d8503a5 92bf9e018585 cedc91461303 a1006dd94a28 17cd71ec5f1e c3733c574b1a 43527f4438e5 9944f7f82ecf d62da149d7c0 3e461d162750 b1497d98354d c0d14d1532b2]
	I0317 13:18:17.362464    8032 ssh_runner.go:195] Run: docker stop db863eea0649 eb0e141e3644 c2a5fddd2f2a bd5d9054ca0b 3508290bdfea 01c8f83a4fd4 6af969732bf3 3dbb8ea1a155 1fb48ef6e80d 713e8fb828e6 9ddab5c27dbb 68768033f15d d7acb494b3ff 85d7e55ce335 702e9352f569 5327df366234 eb495adde413 351f7d8503a5 92bf9e018585 cedc91461303 a1006dd94a28 17cd71ec5f1e c3733c574b1a 43527f4438e5 9944f7f82ecf d62da149d7c0 3e461d162750 b1497d98354d c0d14d1532b2
	I0317 13:18:19.851151    7220 main.go:141] libmachine: [stdout =====>] : 
	I0317 13:18:19.851151    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:19.852074    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor docker-flags-664100 -Count 2
	I0317 13:18:22.311872    7220 main.go:141] libmachine: [stdout =====>] : 
	I0317 13:18:22.312000    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:22.312142    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName docker-flags-664100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-664100\boot2docker.iso'
	I0317 13:18:20.229718    8032 ssh_runner.go:235] Completed: docker stop db863eea0649 eb0e141e3644 c2a5fddd2f2a bd5d9054ca0b 3508290bdfea 01c8f83a4fd4 6af969732bf3 3dbb8ea1a155 1fb48ef6e80d 713e8fb828e6 9ddab5c27dbb 68768033f15d d7acb494b3ff 85d7e55ce335 702e9352f569 5327df366234 eb495adde413 351f7d8503a5 92bf9e018585 cedc91461303 a1006dd94a28 17cd71ec5f1e c3733c574b1a 43527f4438e5 9944f7f82ecf d62da149d7c0 3e461d162750 b1497d98354d c0d14d1532b2: (2.8671198s)
	I0317 13:18:20.242442    8032 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0317 13:18:20.350435    8032 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 13:18:20.395005    8032 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Mar 17 13:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Mar 17 13:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5759 Mar 17 13:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Mar 17 13:11 /etc/kubernetes/scheduler.conf
	
	I0317 13:18:20.407204    8032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 13:18:20.447247    8032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 13:18:20.484731    8032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 13:18:20.505178    8032 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0317 13:18:20.516415    8032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 13:18:20.545429    8032 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 13:18:20.563412    8032 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0317 13:18:20.576422    8032 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 13:18:20.615183    8032 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 13:18:20.640554    8032 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:18:20.764135    8032 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:18:22.763362    8032 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.9990847s)
	I0317 13:18:22.763479    8032 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:18:23.170121    8032 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:18:23.260245    8032 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:18:23.356697    8032 api_server.go:52] waiting for apiserver process to appear ...
	I0317 13:18:23.372056    8032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:18:23.870789    8032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:18:24.371595    8032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:18:25.153931    7220 main.go:141] libmachine: [stdout =====>] : 
	I0317 13:18:25.153931    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:25.153931    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName docker-flags-664100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-664100\disk.vhd'
	I0317 13:18:28.113550    7220 main.go:141] libmachine: [stdout =====>] : 
	I0317 13:18:28.114381    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:28.114381    7220 main.go:141] libmachine: Starting VM...
	I0317 13:18:28.114523    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM docker-flags-664100
	I0317 13:18:24.870734    8032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:18:24.902888    8032 api_server.go:72] duration metric: took 1.5461735s to wait for apiserver process to appear ...
	I0317 13:18:24.902888    8032 api_server.go:88] waiting for apiserver healthz status ...
	I0317 13:18:24.903007    8032 api_server.go:253] Checking apiserver healthz at https://172.25.31.15:8443/healthz ...
	I0317 13:18:27.608179    8032 api_server.go:279] https://172.25.31.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0317 13:18:27.608272    8032 api_server.go:103] status: https://172.25.31.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0317 13:18:27.608272    8032 api_server.go:253] Checking apiserver healthz at https://172.25.31.15:8443/healthz ...
	I0317 13:18:27.676865    8032 api_server.go:279] https://172.25.31.15:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0317 13:18:27.676865    8032 api_server.go:103] status: https://172.25.31.15:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0317 13:18:27.904153    8032 api_server.go:253] Checking apiserver healthz at https://172.25.31.15:8443/healthz ...
	I0317 13:18:27.915014    8032 api_server.go:279] https://172.25.31.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0317 13:18:27.915153    8032 api_server.go:103] status: https://172.25.31.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0317 13:18:28.403212    8032 api_server.go:253] Checking apiserver healthz at https://172.25.31.15:8443/healthz ...
	I0317 13:18:28.415853    8032 api_server.go:279] https://172.25.31.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0317 13:18:28.415853    8032 api_server.go:103] status: https://172.25.31.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
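	The repeating blocks above show minikube polling `https://172.25.31.15:8443/healthz` roughly every 500 ms, tolerating the 403 (anonymous user, RBAC not yet bootstrapped) and 500 (post-start hooks still failing) responses until the apiserver returns 200. A sketch of that retry loop against a throwaway local HTTP server standing in for the apiserver (all names and timings here are illustrative, not minikube's implementation):

```python
# Sketch of a healthz poll loop like the one in the log: retry until the
# endpoint returns 200, treating 403/500 as "not ready yet".
import threading
import time
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Healthz(BaseHTTPRequestHandler):
    hits = 0
    def do_GET(self):
        Healthz.hits += 1
        code = 500 if Healthz.hits < 3 else 200  # unhealthy twice, then ok
        self.send_response(code)
        self.end_headers()
        self.wfile.write(b"ok" if code == 200 else b"healthz check failed")
    def log_message(self, *args):  # keep the demo quiet
        pass

def wait_healthy(url: str, attempts: int = 10, delay: float = 0.05) -> int:
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.status           # 200: apiserver is healthy
        except urllib.error.HTTPError as e:
            if e.code not in (403, 500):     # unexpected status: give up
                raise
        time.sleep(delay)                    # back off and retry
    raise TimeoutError("apiserver never became healthy")

server = HTTPServer(("127.0.0.1", 0), Healthz)
threading.Thread(target=server.serve_forever, daemon=True).start()
status = wait_healthy(f"http://127.0.0.1:{server.server_port}/healthz")
server.shutdown()
print(status)
```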
	I0317 13:18:28.904014    8032 api_server.go:253] Checking apiserver healthz at https://172.25.31.15:8443/healthz ...
	I0317 13:18:28.915993    8032 api_server.go:279] https://172.25.31.15:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0317 13:18:28.915993    8032 api_server.go:103] status: https://172.25.31.15:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
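The repeated 500 responses above are the apiserver's verbose `/healthz` output: each registered check is reported as `[+]… ok` or `[-]… failed`, and here `poststarthook/rbac/bootstrap-roles` is the check still failing while RBAC bootstrap completes. A minimal sketch of extracting the failing checks from such a body (the helper name is illustrative, not a minikube or client-go API):

```python
# Parse kube-apiserver verbose /healthz output ("[+]check ok" /
# "[-]check failed: reason withheld" lines) and collect the names of
# the checks that failed. Illustrative helper only.
def failing_healthz_checks(body: str) -> list[str]:
    failed = []
    for line in body.splitlines():
        line = line.strip()
        if line.startswith("[-]"):
            # e.g. "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"
            failed.append(line[3:].split(" failed", 1)[0])
    return failed

sample = """[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
healthz check failed"""
print(failing_healthz_checks(sample))  # ['poststarthook/rbac/bootstrap-roles']
```

Once this list is empty the endpoint returns a plain `200 ok`, which is exactly the transition visible a few lines below at 13:18:29.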
	I0317 13:18:29.403701    8032 api_server.go:253] Checking apiserver healthz at https://172.25.31.15:8443/healthz ...
	I0317 13:18:29.413014    8032 api_server.go:279] https://172.25.31.15:8443/healthz returned 200:
	ok
	I0317 13:18:29.429258    8032 api_server.go:141] control plane version: v1.32.2
	I0317 13:18:29.429319    8032 api_server.go:131] duration metric: took 4.5263809s to wait for apiserver health ...
	I0317 13:18:29.429382    8032 cni.go:84] Creating CNI manager for ""
	I0317 13:18:29.429442    8032 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 13:18:29.434238    8032 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0317 13:18:29.457469    8032 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0317 13:18:29.485655    8032 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0317 13:18:29.524367    8032 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 13:18:29.557264    8032 system_pods.go:59] 7 kube-system pods found
	I0317 13:18:29.557818    8032 system_pods.go:61] "coredns-668d6bf9bc-jmcjq" [e0a439f6-b184-4231-9101-bf27c2c2e10d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 13:18:29.557818    8032 system_pods.go:61] "etcd-kubernetes-upgrade-816300" [c793b201-7c0f-4805-ad4a-926ddffbc14e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0317 13:18:29.557897    8032 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-816300" [bbf40bda-7e5a-401b-ba4d-0f7cc2197e62] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0317 13:18:29.557897    8032 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-816300" [44ddb5b4-c593-47c9-9981-f3d213aee511] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0317 13:18:29.557897    8032 system_pods.go:61] "kube-proxy-g6v8v" [c9cf7a98-44d0-42fe-a883-020c690b942c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0317 13:18:29.557897    8032 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-816300" [92050c95-6904-43bd-b95f-bb4688cdce6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0317 13:18:29.557897    8032 system_pods.go:61] "storage-provisioner" [5d66bfa6-10da-42f3-bc95-2b551038aed8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0317 13:18:29.557897    8032 system_pods.go:74] duration metric: took 33.3997ms to wait for pod list to return data ...
	I0317 13:18:29.557897    8032 node_conditions.go:102] verifying NodePressure condition ...
	I0317 13:18:29.570062    8032 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 13:18:29.570118    8032 node_conditions.go:123] node cpu capacity is 2
	I0317 13:18:29.570190    8032 node_conditions.go:105] duration metric: took 12.2934ms to run NodePressure ...
	I0317 13:18:29.570246    8032 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0317 13:18:29.982271    8032 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 13:18:30.009192    8032 ops.go:34] apiserver oom_adj: -16
	I0317 13:18:30.009242    8032 kubeadm.go:597] duration metric: took 12.9317383s to restartPrimaryControlPlane
	I0317 13:18:30.009242    8032 kubeadm.go:394] duration metric: took 13.0706983s to StartCluster
	I0317 13:18:30.009242    8032 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:18:30.009488    8032 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 13:18:30.012184    8032 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 13:18:30.013716    8032 start.go:235] Will wait 6m0s for node &{Name: IP:172.25.31.15 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0317 13:18:30.013716    8032 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 13:18:30.013716    8032 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-816300"
	I0317 13:18:30.013716    8032 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-816300"
	I0317 13:18:30.013716    8032 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-816300"
	I0317 13:18:30.013716    8032 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-816300"
	W0317 13:18:30.013716    8032 addons.go:247] addon storage-provisioner should already be in state true
	I0317 13:18:30.014324    8032 config.go:182] Loaded profile config "kubernetes-upgrade-816300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 13:18:30.014324    8032 host.go:66] Checking if "kubernetes-upgrade-816300" exists ...
	I0317 13:18:30.015890    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:18:30.016425    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:18:30.018458    8032 out.go:177] * Verifying Kubernetes components...
	I0317 13:18:30.041962    8032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 13:18:30.475433    8032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 13:18:30.520050    8032 api_server.go:52] waiting for apiserver process to appear ...
	I0317 13:18:30.541103    8032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:18:30.586233    8032 api_server.go:72] duration metric: took 572.4494ms to wait for apiserver process to appear ...
	I0317 13:18:30.586346    8032 api_server.go:88] waiting for apiserver healthz status ...
	I0317 13:18:30.586346    8032 api_server.go:253] Checking apiserver healthz at https://172.25.31.15:8443/healthz ...
	I0317 13:18:30.598383    8032 api_server.go:279] https://172.25.31.15:8443/healthz returned 200:
	ok
	I0317 13:18:30.603239    8032 api_server.go:141] control plane version: v1.32.2
	I0317 13:18:30.603484    8032 api_server.go:131] duration metric: took 17.1379ms to wait for apiserver health ...
	I0317 13:18:30.603560    8032 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 13:18:30.609373    8032 system_pods.go:59] 7 kube-system pods found
	I0317 13:18:30.609373    8032 system_pods.go:61] "coredns-668d6bf9bc-jmcjq" [e0a439f6-b184-4231-9101-bf27c2c2e10d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 13:18:30.609373    8032 system_pods.go:61] "etcd-kubernetes-upgrade-816300" [c793b201-7c0f-4805-ad4a-926ddffbc14e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0317 13:18:30.609373    8032 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-816300" [bbf40bda-7e5a-401b-ba4d-0f7cc2197e62] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0317 13:18:30.609373    8032 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-816300" [44ddb5b4-c593-47c9-9981-f3d213aee511] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0317 13:18:30.609373    8032 system_pods.go:61] "kube-proxy-g6v8v" [c9cf7a98-44d0-42fe-a883-020c690b942c] Running
	I0317 13:18:30.609373    8032 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-816300" [92050c95-6904-43bd-b95f-bb4688cdce6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0317 13:18:30.609373    8032 system_pods.go:61] "storage-provisioner" [5d66bfa6-10da-42f3-bc95-2b551038aed8] Running
	I0317 13:18:30.609373    8032 system_pods.go:74] duration metric: took 5.8134ms to wait for pod list to return data ...
	I0317 13:18:30.609373    8032 kubeadm.go:582] duration metric: took 595.6507ms to wait for: map[apiserver:true system_pods:true]
	I0317 13:18:30.609373    8032 node_conditions.go:102] verifying NodePressure condition ...
	I0317 13:18:30.614584    8032 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0317 13:18:30.614584    8032 node_conditions.go:123] node cpu capacity is 2
	I0317 13:18:30.614584    8032 node_conditions.go:105] duration metric: took 5.2113ms to run NodePressure ...
	I0317 13:18:30.614584    8032 start.go:241] waiting for startup goroutines ...
	I0317 13:18:32.651441    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:18:32.651441    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:32.651441    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:18:32.651441    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:32.653433    8032 kapi.go:59] client config for kubernetes-upgrade-816300: &rest.Config{Host:"https://172.25.31.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-816300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-816300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil),
KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2e292e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0317 13:18:32.655084    8032 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-816300"
	W0317 13:18:32.655185    8032 addons.go:247] addon default-storageclass should already be in state true
	I0317 13:18:32.655185    8032 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 13:18:31.481664    7220 main.go:141] libmachine: [stdout =====>] : 
	I0317 13:18:31.482734    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:31.482734    7220 main.go:141] libmachine: Waiting for host to start...
	I0317 13:18:31.482734    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-664100 ).state
	I0317 13:18:32.655185    8032 host.go:66] Checking if "kubernetes-upgrade-816300" exists ...
	I0317 13:18:32.656458    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:18:32.658342    8032 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:18:32.658342    8032 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 13:18:32.658448    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:18:34.060504    7220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:18:34.060689    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:34.060779    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-664100 ).networkadapters[0]).ipaddresses[0]
	I0317 13:18:36.879990    7220 main.go:141] libmachine: [stdout =====>] : 
	I0317 13:18:36.879990    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:37.880163    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-664100 ).state
	I0317 13:18:35.155783    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:18:35.155783    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:35.156793    8032 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 13:18:35.156793    8032 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 13:18:35.156793    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:18:35.156866    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:35.156866    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-816300 ).state
	I0317 13:18:35.157156    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:18:37.628557    8032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:18:37.628557    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:37.628557    8032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-816300 ).networkadapters[0]).ipaddresses[0]
	I0317 13:18:38.095259    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:18:38.095508    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:38.095735    8032 sshutil.go:53] new ssh client: &{IP:172.25.31.15 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-816300\id_rsa Username:docker}
	I0317 13:18:38.262703    8032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 13:18:39.487288    8032 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.2245713s)
	I0317 13:18:40.944612    8032 main.go:141] libmachine: [stdout =====>] : 172.25.31.15
	
	I0317 13:18:40.944695    8032 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:40.945425    8032 sshutil.go:53] new ssh client: &{IP:172.25.31.15 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-816300\id_rsa Username:docker}
	I0317 13:18:41.090706    8032 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 13:18:41.352306    8032 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 13:18:41.354310    8032 addons.go:514] duration metric: took 11.3404683s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0317 13:18:41.354310    8032 start.go:246] waiting for cluster config update ...
	I0317 13:18:41.354310    8032 start.go:255] writing updated cluster config ...
	I0317 13:18:41.377697    8032 ssh_runner.go:195] Run: rm -f paused
	I0317 13:18:41.542638    8032 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 13:18:41.546229    8032 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-816300" cluster and "default" namespace by default
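The final line reports `kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)` — minikube compares the client and cluster versions on the minor field and only warns when they drift apart. A minimal sketch of that comparison, under the assumption of plain `major.minor.patch` version strings (illustrative only; minikube's actual check lives in its own source tree):

```python
# Compute the "minor skew" between a kubectl client version and the
# cluster version, as logged by minikube's start.go. Assumes simple
# "major.minor.patch" strings; illustrative helper, not minikube code.
def minor_skew(client: str, cluster: str) -> int:
    client_minor = int(client.split(".")[1])
    cluster_minor = int(cluster.split(".")[1])
    return abs(client_minor - cluster_minor)

print(minor_skew("1.32.3", "1.32.2"))  # 0, matching "minor skew: 0" above
```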
	I0317 13:18:40.684880    7220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:18:40.684880    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:40.685363    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-664100 ).networkadapters[0]).ipaddresses[0]
	I0317 13:18:43.662921    7220 main.go:141] libmachine: [stdout =====>] : 
	I0317 13:18:43.663054    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:44.664163    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-664100 ).state
	I0317 13:18:47.122985    7220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:18:47.122985    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:47.122985    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-664100 ).networkadapters[0]).ipaddresses[0]
	I0317 13:18:49.951690    7220 main.go:141] libmachine: [stdout =====>] : 
	I0317 13:18:49.952452    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:50.952904    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-664100 ).state
	I0317 13:18:53.552571    7220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:18:53.552571    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:53.552571    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-664100 ).networkadapters[0]).ipaddresses[0]
	I0317 13:18:56.608434    7220 main.go:141] libmachine: [stdout =====>] : 
	I0317 13:18:56.608434    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:18:57.610037    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-664100 ).state
	I0317 13:19:00.229774    7220 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 13:19:00.229774    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:19:00.229774    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-664100 ).networkadapters[0]).ipaddresses[0]
	I0317 13:19:03.441056    7220 main.go:141] libmachine: [stdout =====>] : 172.25.31.125
	
	I0317 13:19:03.441447    7220 main.go:141] libmachine: [stderr =====>] : 
	I0317 13:19:03.441617    7220 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-664100 ).state
	
	
	==> Docker <==
	Mar 17 13:17:16 pause-471400 dockerd[4790]: time="2025-03-17T13:17:16.161709711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:16 pause-471400 dockerd[4790]: time="2025-03-17T13:17:16.162187517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:16 pause-471400 dockerd[4790]: time="2025-03-17T13:17:16.204048953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 13:17:16 pause-471400 dockerd[4790]: time="2025-03-17T13:17:16.204206355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 13:17:16 pause-471400 dockerd[4790]: time="2025-03-17T13:17:16.204220456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:16 pause-471400 dockerd[4790]: time="2025-03-17T13:17:16.204318457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:19 pause-471400 cri-dockerd[5064]: time="2025-03-17T13:17:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.245299358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.245757263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.245876365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.246762575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.249588607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.249664208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.249680108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.249960812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:21 pause-471400 cri-dockerd[5064]: time="2025-03-17T13:17:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e24eb091f5348b0ae125306f5a32c689a643271dc3d4455fa127281466cc5bc0/resolv.conf as [nameserver 172.25.16.1]"
	Mar 17 13:17:21 pause-471400 cri-dockerd[5064]: time="2025-03-17T13:17:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f960f21183ccec1babada0111abfee90aa2dcdcdb68df584cc369e1e1372f515/resolv.conf as [nameserver 172.25.16.1]"
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.632997819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.633108620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.633169821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.633345022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.953291775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.953667879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.953767180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 17 13:17:21 pause-471400 dockerd[4790]: time="2025-03-17T13:17:21.954143984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	30a118f025650       c69fa2e9cbf5f       About a minute ago   Running             coredns                   1                   f960f21183cce       coredns-668d6bf9bc-2xpj4
	ed4022dcab640       f1332858868e1       About a minute ago   Running             kube-proxy                1                   e24eb091f5348       kube-proxy-2w5n2
	384fe5c06f2df       85b7a174738ba       2 minutes ago        Running             kube-apiserver            1                   fd2fa1dccb7a9       kube-apiserver-pause-471400
	e881d41583c35       a9e7e6b294baf       2 minutes ago        Running             etcd                      1                   0d9f0b8d8c9e1       etcd-pause-471400
	9b55e85d3d127       b6a454c5a800d       2 minutes ago        Running             kube-controller-manager   1                   f17938a9116db       kube-controller-manager-pause-471400
	c57a64a068ec0       d8e673e7c9983       2 minutes ago        Running             kube-scheduler            1                   b42e93259bac8       kube-scheduler-pause-471400
	d4f557bada235       c69fa2e9cbf5f       8 minutes ago        Exited              coredns                   0                   c0ee58f77451c       coredns-668d6bf9bc-2xpj4
	bec7db06f9e97       f1332858868e1       8 minutes ago        Exited              kube-proxy                0                   0b09a5de1f0b4       kube-proxy-2w5n2
	cecd0f7a3b605       a9e7e6b294baf       8 minutes ago        Exited              etcd                      0                   587f0dda7141f       etcd-pause-471400
	48490bf5143cc       d8e673e7c9983       8 minutes ago        Exited              kube-scheduler            0                   c59c53abe2ddc       kube-scheduler-pause-471400
	a7133843d6ed6       b6a454c5a800d       8 minutes ago        Exited              kube-controller-manager   0                   a77930f3d721b       kube-controller-manager-pause-471400
	5d96f9d335dfb       85b7a174738ba       8 minutes ago        Exited              kube-apiserver            0                   4a105f3090f3f       kube-apiserver-pause-471400
	
	
	==> coredns [30a118f02565] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1d9c7cb05d14a915f04974bf55cf5686cd43414eb293ac9a790a39f065db1c589d13dfd7b12923475c8499a18e0bdc26041d87eeb9e9602ff2cbbc57da44e2c0
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45307 - 28259 "HINFO IN 85235851623009837.2427239534048236081. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.552204713s
	
	
	==> coredns [d4f557bada23] <==
	[INFO] plugin/reload: Running configuration SHA512 = 1d9c7cb05d14a915f04974bf55cf5686cd43414eb293ac9a790a39f065db1c589d13dfd7b12923475c8499a18e0bdc26041d87eeb9e9602ff2cbbc57da44e2c0
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46201 - 39183 "HINFO IN 6984254641872043389.8388004187190449982. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029247159s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1613281540]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Mar-2025 13:10:33.338) (total time: 30005ms):
	Trace[1613281540]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (13:11:03.342)
	Trace[1613281540]: [30.005282978s] [30.005282978s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[11254756]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Mar-2025 13:10:33.337) (total time: 30006ms):
	Trace[11254756]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (13:11:03.342)
	Trace[11254756]: [30.006267481s] [30.006267481s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[805144725]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Mar-2025 13:10:33.342) (total time: 30002ms):
	Trace[805144725]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (13:11:03.345)
	Trace[805144725]: [30.002919579s] [30.002919579s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	
	
	==> dmesg <==
	[  +7.995793] systemd-fstab-generator[1857]: Ignoring "noauto" option for root device
	[  +0.124170] kauditd_printk_skb: 74 callbacks suppressed
	[  +8.557395] systemd-fstab-generator[2288]: Ignoring "noauto" option for root device
	[  +0.146230] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.717688] systemd-fstab-generator[2406]: Ignoring "noauto" option for root device
	[  +0.192992] kauditd_printk_skb: 12 callbacks suppressed
	[Mar17 13:11] kauditd_printk_skb: 67 callbacks suppressed
	[Mar17 13:16] systemd-fstab-generator[4346]: Ignoring "noauto" option for root device
	[  +0.705807] systemd-fstab-generator[4383]: Ignoring "noauto" option for root device
	[  +0.295876] systemd-fstab-generator[4395]: Ignoring "noauto" option for root device
	[  +0.305026] systemd-fstab-generator[4423]: Ignoring "noauto" option for root device
	[Mar17 13:17] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.248518] systemd-fstab-generator[5013]: Ignoring "noauto" option for root device
	[  +0.223294] systemd-fstab-generator[5025]: Ignoring "noauto" option for root device
	[  +0.218731] systemd-fstab-generator[5037]: Ignoring "noauto" option for root device
	[  +0.311982] systemd-fstab-generator[5052]: Ignoring "noauto" option for root device
	[  +1.000949] systemd-fstab-generator[5222]: Ignoring "noauto" option for root device
	[  +0.127940] kauditd_printk_skb: 119 callbacks suppressed
	[  +3.709042] systemd-fstab-generator[5341]: Ignoring "noauto" option for root device
	[  +1.441375] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.258211] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.993660] kauditd_printk_skb: 25 callbacks suppressed
	[  +4.824841] systemd-fstab-generator[6220]: Ignoring "noauto" option for root device
	[ +11.225498] systemd-fstab-generator[6299]: Ignoring "noauto" option for root device
	[  +0.157997] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [cecd0f7a3b60] <==
	{"level":"warn","ts":"2025-03-17T13:11:50.338989Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.176536ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2965503412227013932 > lease_revoke:<id:292795a439fffcf0>","response":"size:27"}
	{"level":"info","ts":"2025-03-17T13:11:50.340172Z","caller":"traceutil/trace.go:171","msg":"trace[737798938] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:397; }","duration":"123.303543ms","start":"2025-03-17T13:11:50.216843Z","end":"2025-03-17T13:11:50.340146Z","steps":["trace[737798938] 'range keys from in-memory index tree'  (duration: 122.252738ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T13:14:58.143162Z","caller":"traceutil/trace.go:171","msg":"trace[1053069170] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"146.517709ms","start":"2025-03-17T13:14:57.996622Z","end":"2025-03-17T13:14:58.143140Z","steps":["trace[1053069170] 'process raft request'  (duration: 146.386207ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T13:14:58.441752Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.383405ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T13:14:58.441939Z","caller":"traceutil/trace.go:171","msg":"trace[2100850140] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:453; }","duration":"225.593909ms","start":"2025-03-17T13:14:58.216329Z","end":"2025-03-17T13:14:58.441923Z","steps":["trace[2100850140] 'range keys from in-memory index tree'  (duration: 225.362605ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T13:15:02.295694Z","caller":"traceutil/trace.go:171","msg":"trace[193129194] transaction","detail":"{read_only:false; response_revision:454; number_of_response:1; }","duration":"190.297679ms","start":"2025-03-17T13:15:02.105379Z","end":"2025-03-17T13:15:02.295677Z","steps":["trace[193129194] 'process raft request'  (duration: 189.894872ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T13:15:04.977280Z","caller":"traceutil/trace.go:171","msg":"trace[1359934569] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"135.604477ms","start":"2025-03-17T13:15:04.841646Z","end":"2025-03-17T13:15:04.977251Z","steps":["trace[1359934569] 'process raft request'  (duration: 111.478489ms)","trace[1359934569] 'compare'  (duration: 24.016686ms)"],"step_count":2}
	{"level":"info","ts":"2025-03-17T13:15:10.999162Z","caller":"traceutil/trace.go:171","msg":"trace[258622077] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"119.593274ms","start":"2025-03-17T13:15:10.879547Z","end":"2025-03-17T13:15:10.999141Z","steps":["trace[258622077] 'process raft request'  (duration: 119.262269ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T13:15:11.391304Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.088434ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T13:15:11.391384Z","caller":"traceutil/trace.go:171","msg":"trace[828642724] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:457; }","duration":"175.189035ms","start":"2025-03-17T13:15:11.216180Z","end":"2025-03-17T13:15:11.391370Z","steps":["trace[828642724] 'range keys from in-memory index tree'  (duration: 175.078034ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T13:15:11.392058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.990995ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T13:15:11.392173Z","caller":"traceutil/trace.go:171","msg":"trace[2034534485] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:457; }","duration":"115.137998ms","start":"2025-03-17T13:15:11.277024Z","end":"2025-03-17T13:15:11.392162Z","steps":["trace[2034534485] 'range keys from in-memory index tree'  (duration: 114.932794ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T13:16:00.101294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.268497ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2965503412227014895 > lease_revoke:<id:292795a43a0000b1>","response":"size:27"}
	{"level":"warn","ts":"2025-03-17T13:16:05.353991Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.345465ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T13:16:05.354111Z","caller":"traceutil/trace.go:171","msg":"trace[472980301] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:475; }","duration":"136.488467ms","start":"2025-03-17T13:16:05.217606Z","end":"2025-03-17T13:16:05.354095Z","steps":["trace[472980301] 'range keys from in-memory index tree'  (duration: 136.329365ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T13:16:55.404983Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-03-17T13:16:55.405138Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"pause-471400","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.25.31.3:2380"],"advertise-client-urls":["https://172.25.31.3:2379"]}
	{"level":"warn","ts":"2025-03-17T13:16:55.405236Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-03-17T13:16:55.405337Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-03-17T13:16:55.495271Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 172.25.31.3:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-03-17T13:16:55.495354Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 172.25.31.3:2379: use of closed network connection"}
	{"level":"info","ts":"2025-03-17T13:16:55.495409Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"5591e26a9a7b2927","current-leader-member-id":"5591e26a9a7b2927"}
	{"level":"info","ts":"2025-03-17T13:16:55.510191Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"172.25.31.3:2380"}
	{"level":"info","ts":"2025-03-17T13:16:55.510521Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"172.25.31.3:2380"}
	{"level":"info","ts":"2025-03-17T13:16:55.510537Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"pause-471400","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.25.31.3:2380"],"advertise-client-urls":["https://172.25.31.3:2379"]}
	
	
	==> etcd [e881d41583c3] <==
	{"level":"info","ts":"2025-03-17T13:17:16.547055Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"594dfe8495e74bd1","local-member-id":"5591e26a9a7b2927","added-peer-id":"5591e26a9a7b2927","added-peer-peer-urls":["https://172.25.31.3:2380"]}
	{"level":"info","ts":"2025-03-17T13:17:16.547363Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"594dfe8495e74bd1","local-member-id":"5591e26a9a7b2927","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T13:17:16.548207Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T13:17:16.584738Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T13:17:16.592848Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.25.31.3:2380"}
	{"level":"info","ts":"2025-03-17T13:17:16.595655Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.25.31.3:2380"}
	{"level":"info","ts":"2025-03-17T13:17:16.591972Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-03-17T13:17:16.598686Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"5591e26a9a7b2927","initial-advertise-peer-urls":["https://172.25.31.3:2380"],"listen-peer-urls":["https://172.25.31.3:2380"],"advertise-client-urls":["https://172.25.31.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.25.31.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-03-17T13:17:16.599700Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-03-17T13:17:17.585141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5591e26a9a7b2927 is starting a new election at term 2"}
	{"level":"info","ts":"2025-03-17T13:17:17.585728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5591e26a9a7b2927 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-03-17T13:17:17.585987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5591e26a9a7b2927 received MsgPreVoteResp from 5591e26a9a7b2927 at term 2"}
	{"level":"info","ts":"2025-03-17T13:17:17.586296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5591e26a9a7b2927 became candidate at term 3"}
	{"level":"info","ts":"2025-03-17T13:17:17.586433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5591e26a9a7b2927 received MsgVoteResp from 5591e26a9a7b2927 at term 3"}
	{"level":"info","ts":"2025-03-17T13:17:17.586643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5591e26a9a7b2927 became leader at term 3"}
	{"level":"info","ts":"2025-03-17T13:17:17.586795Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5591e26a9a7b2927 elected leader 5591e26a9a7b2927 at term 3"}
	{"level":"info","ts":"2025-03-17T13:17:17.602506Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"5591e26a9a7b2927","local-member-attributes":"{Name:pause-471400 ClientURLs:[https://172.25.31.3:2379]}","request-path":"/0/members/5591e26a9a7b2927/attributes","cluster-id":"594dfe8495e74bd1","publish-timeout":"7s"}
	{"level":"info","ts":"2025-03-17T13:17:17.603405Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T13:17:17.604430Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T13:17:17.611742Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.25.31.3:2379"}
	{"level":"info","ts":"2025-03-17T13:17:17.612246Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T13:17:17.612830Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T13:17:17.623815Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-03-17T13:17:17.628168Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-03-17T13:17:17.628400Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:19:27 up 11 min,  0 users,  load average: 0.36, 0.59, 0.31
	Linux pause-471400 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [384fe5c06f2d] <==
	I0317 13:17:19.853939       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0317 13:17:19.877532       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0317 13:17:19.877858       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0317 13:17:19.878317       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0317 13:17:19.878917       1 aggregator.go:171] initial CRD sync complete...
	I0317 13:17:19.879177       1 autoregister_controller.go:144] Starting autoregister controller
	I0317 13:17:19.879463       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0317 13:17:19.879676       1 cache.go:39] Caches are synced for autoregister controller
	I0317 13:17:19.899791       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0317 13:17:19.915654       1 shared_informer.go:320] Caches are synced for configmaps
	I0317 13:17:19.915993       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0317 13:17:19.916354       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0317 13:17:19.920202       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0317 13:17:19.923408       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0317 13:17:19.940168       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0317 13:17:20.615620       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0317 13:17:20.735965       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0317 13:17:21.230299       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.25.31.3]
	I0317 13:17:21.231913       1 controller.go:615] quota admission added evaluator for: endpoints
	I0317 13:17:21.247969       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0317 13:17:22.232797       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0317 13:17:22.336650       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0317 13:17:22.412450       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0317 13:17:22.448236       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0317 13:17:23.340983       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [5d96f9d335df] <==
	W0317 13:17:04.647526       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.672443       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.687878       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.710901       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.740911       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.781912       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.817857       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.819414       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.832338       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.907780       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.919812       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.930836       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.952629       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.961532       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:04.999510       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.001029       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.019544       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.048122       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.069779       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.069814       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.091638       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.112756       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.197283       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.246887       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0317 13:17:05.292253       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [9b55e85d3d12] <==
	I0317 13:17:23.080296       1 shared_informer.go:320] Caches are synced for deployment
	I0317 13:17:23.081584       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0317 13:17:23.084547       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0317 13:17:23.085804       1 shared_informer.go:320] Caches are synced for expand
	I0317 13:17:23.090281       1 shared_informer.go:320] Caches are synced for node
	I0317 13:17:23.090411       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0317 13:17:23.090494       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0317 13:17:23.090502       1 shared_informer.go:320] Caches are synced for HPA
	I0317 13:17:23.090830       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0317 13:17:23.091129       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0317 13:17:23.091364       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-471400"
	I0317 13:17:23.091684       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0317 13:17:23.095407       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0317 13:17:23.097174       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0317 13:17:23.099596       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0317 13:17:23.101921       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0317 13:17:23.117369       1 shared_informer.go:320] Caches are synced for garbage collector
	I0317 13:17:23.117687       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0317 13:17:23.117790       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0317 13:17:23.117393       1 shared_informer.go:320] Caches are synced for garbage collector
	I0317 13:17:23.358357       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="288.537752ms"
	I0317 13:17:23.406361       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="47.830211ms"
	I0317 13:17:23.406654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="76.9µs"
	I0317 13:17:29.857142       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="38.129714ms"
	I0317 13:17:29.858277       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.078095ms"
	
	
	==> kube-controller-manager [a7133843d6ed] <==
	I0317 13:10:29.922135       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-471400"
	I0317 13:10:29.922435       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-471400"
	I0317 13:10:29.924024       1 shared_informer.go:320] Caches are synced for persistent volume
	I0317 13:10:29.926361       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0317 13:10:29.943211       1 shared_informer.go:320] Caches are synced for ephemeral
	I0317 13:10:29.943268       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0317 13:10:29.944165       1 shared_informer.go:320] Caches are synced for resource quota
	I0317 13:10:29.980845       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-471400"
	I0317 13:10:30.162139       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-471400"
	I0317 13:10:31.098735       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="362.465218ms"
	I0317 13:10:31.145521       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="46.665715ms"
	I0317 13:10:31.146629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="73.902µs"
	I0317 13:10:31.192066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="158.805µs"
	I0317 13:10:31.268032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="102.703µs"
	I0317 13:10:31.736967       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="54.465068ms"
	I0317 13:10:31.751111       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.608742ms"
	I0317 13:10:31.751271       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="36.001µs"
	I0317 13:10:33.056446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="63.9µs"
	I0317 13:10:33.089915       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="91.4µs"
	I0317 13:10:33.108387       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="63µs"
	I0317 13:10:33.113501       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="113.8µs"
	I0317 13:10:35.939212       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-471400"
	I0317 13:11:10.093929       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="18.323371ms"
	I0317 13:11:10.094245       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="135.8µs"
	I0317 13:15:11.002912       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-471400"
	
	
	==> kube-proxy [bec7db06f9e9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0317 13:10:33.466097       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0317 13:10:33.507364       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.31.3"]
	E0317 13:10:33.508007       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 13:10:33.571022       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0317 13:10:33.571126       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0317 13:10:33.571156       1 server_linux.go:170] "Using iptables Proxier"
	I0317 13:10:33.575685       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 13:10:33.579156       1 server.go:497] "Version info" version="v1.32.2"
	I0317 13:10:33.579485       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 13:10:33.587194       1 config.go:105] "Starting endpoint slice config controller"
	I0317 13:10:33.588568       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 13:10:33.588619       1 config.go:199] "Starting service config controller"
	I0317 13:10:33.588627       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 13:10:33.591465       1 config.go:329] "Starting node config controller"
	I0317 13:10:33.591991       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 13:10:33.689240       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0317 13:10:33.689161       1 shared_informer.go:320] Caches are synced for service config
	I0317 13:10:33.692206       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ed4022dcab64] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0317 13:17:22.059127       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0317 13:17:22.092147       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.25.31.3"]
	E0317 13:17:22.092370       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 13:17:22.192588       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0317 13:17:22.192889       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0317 13:17:22.199226       1 server_linux.go:170] "Using iptables Proxier"
	I0317 13:17:22.208333       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 13:17:22.208871       1 server.go:497] "Version info" version="v1.32.2"
	I0317 13:17:22.209593       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 13:17:22.223356       1 config.go:199] "Starting service config controller"
	I0317 13:17:22.226798       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 13:17:22.228512       1 config.go:105] "Starting endpoint slice config controller"
	I0317 13:17:22.228601       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 13:17:22.231568       1 config.go:329] "Starting node config controller"
	I0317 13:17:22.231923       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 13:17:22.327371       1 shared_informer.go:320] Caches are synced for service config
	I0317 13:17:22.329054       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0317 13:17:22.332838       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [48490bf5143c] <==
	W0317 13:10:23.590405       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0317 13:10:23.590523       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 13:10:23.610060       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0317 13:10:23.610153       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 13:10:23.738688       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0317 13:10:23.740693       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 13:10:23.837985       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0317 13:10:23.838097       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 13:10:23.850631       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0317 13:10:23.851070       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 13:10:23.851554       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0317 13:10:23.852226       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0317 13:10:23.932768       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0317 13:10:23.933225       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0317 13:10:23.981700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0317 13:10:23.981821       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 13:10:23.995757       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0317 13:10:23.996212       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0317 13:10:24.016153       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0317 13:10:24.016320       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0317 13:10:26.801476       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0317 13:16:55.425674       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0317 13:16:55.425708       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0317 13:16:55.426021       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0317 13:16:55.422034       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c57a64a068ec] <==
	I0317 13:17:17.605641       1 serving.go:386] Generated self-signed cert in-memory
	W0317 13:17:19.772491       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0317 13:17:19.772549       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0317 13:17:19.772562       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0317 13:17:19.772572       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0317 13:17:19.846337       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0317 13:17:19.849158       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 13:17:19.855004       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0317 13:17:19.858217       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0317 13:17:19.858302       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0317 13:17:19.859333       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0317 13:17:19.959694       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 17 13:17:18 pause-471400 kubelet[5348]: E0317 13:17:18.998849    5348 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-471400\" not found" node="pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: E0317 13:17:19.000448    5348 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-471400\" not found" node="pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: E0317 13:17:19.000688    5348 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-471400\" not found" node="pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: I0317 13:17:19.928847    5348 kubelet_node_status.go:125] "Node was previously registered" node="pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: I0317 13:17:19.928966    5348 kubelet_node_status.go:79] "Successfully registered node" node="pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: I0317 13:17:19.929002    5348 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: I0317 13:17:19.929917    5348 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: I0317 13:17:19.944739    5348 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: E0317 13:17:19.966658    5348 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-471400\" already exists" pod="kube-system/kube-scheduler-pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: I0317 13:17:19.966707    5348 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: E0317 13:17:19.980976    5348 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-471400\" already exists" pod="kube-system/etcd-pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: I0317 13:17:19.981115    5348 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: E0317 13:17:19.994215    5348 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-471400\" already exists" pod="kube-system/kube-apiserver-pause-471400"
	Mar 17 13:17:19 pause-471400 kubelet[5348]: I0317 13:17:19.994259    5348 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-471400"
	Mar 17 13:17:20 pause-471400 kubelet[5348]: E0317 13:17:20.030676    5348 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-471400\" already exists" pod="kube-system/kube-controller-manager-pause-471400"
	Mar 17 13:17:20 pause-471400 kubelet[5348]: I0317 13:17:20.529396    5348 apiserver.go:52] "Watching apiserver"
	Mar 17 13:17:20 pause-471400 kubelet[5348]: I0317 13:17:20.546946    5348 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Mar 17 13:17:20 pause-471400 kubelet[5348]: I0317 13:17:20.601451    5348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2be3017-491d-427e-982e-7fcdf387b94a-lib-modules\") pod \"kube-proxy-2w5n2\" (UID: \"d2be3017-491d-427e-982e-7fcdf387b94a\") " pod="kube-system/kube-proxy-2w5n2"
	Mar 17 13:17:20 pause-471400 kubelet[5348]: I0317 13:17:20.601864    5348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2be3017-491d-427e-982e-7fcdf387b94a-xtables-lock\") pod \"kube-proxy-2w5n2\" (UID: \"d2be3017-491d-427e-982e-7fcdf387b94a\") " pod="kube-system/kube-proxy-2w5n2"
	Mar 17 13:17:24 pause-471400 kubelet[5348]: I0317 13:17:24.247732    5348 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 17 13:17:29 pause-471400 kubelet[5348]: I0317 13:17:29.791896    5348 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 17 13:17:46 pause-471400 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Mar 17 13:17:46 pause-471400 systemd[1]: kubelet.service: Deactivated successfully.
	Mar 17 13:17:46 pause-471400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 17 13:17:46 pause-471400 systemd[1]: kubelet.service: Consumed 1.585s CPU time.
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-471400 -n pause-471400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-471400 -n pause-471400: exit status 2 (13.4129325s)

-- stdout --
	Paused

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-471400" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestPause/serial/Unpause (103.50s)

TestNetworkPlugins/group/custom-flannel/Start (10800.411s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-841900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=hyperv
panic: test timed out after 3h0m0s
	running tests:
		TestNetworkPlugins (29m54s)
		TestNetworkPlugins/group/auto (5m10s)
		TestNetworkPlugins/group/auto/Start (5m10s)
		TestNetworkPlugins/group/calico (49s)
		TestNetworkPlugins/group/calico/Start (49s)
		TestNetworkPlugins/group/custom-flannel (10s)
		TestNetworkPlugins/group/custom-flannel/Start (10s)
		TestNetworkPlugins/group/kindnet (3m30s)
		TestNetworkPlugins/group/kindnet/Start (3m30s)
		TestStartStop (22m33s)

goroutine 2388 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2484 +0x394
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

goroutine 1 [chan receive, 2 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x486
testing.tRunner(0xc000003dc0, 0xc00008bbc8)
	/usr/local/go/src/testing/testing.go:1798 +0x104
testing.runTests(0xc000124228, {0x56872c0, 0x2b, 0x2b}, {0xffffffffffffffff?, 0xc000afd450?, 0x56ae640?})
	/usr/local/go/src/testing/testing.go:2277 +0x4b4
testing.(*M).Run(0xc0007d32c0)
	/usr/local/go/src/testing/testing.go:2142 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc0007d32c0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

goroutine 151 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c48810, 0xc000078310}, 0xc00001bf50, 0xc00001bf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3c48810, 0xc000078310}, 0xa0?, 0xc00001bf50, 0xc00001bf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c48810?, 0xc000078310?}, 0x0?, 0x7335c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00001bfd0?, 0x76cc04?, 0xc00096a480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 165
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:145 +0x27a

goroutine 822 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc0007f37d0, 0x35)
	/usr/local/go/src/runtime/sema.go:597 +0x15d
sync.(*Cond).Wait(0xc0016a7d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3c5c4e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007f3800)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:159 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc001848808?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b02680, {0x3c096c0, 0xc0014a7470}, 0x1, 0xc000078310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b02680, 0x3b9aca00, 0x0, 0x1, 0xc000078310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 867
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:143 +0x1cf

goroutine 2074 [chan receive, 22 minutes]:
testing.(*T).Run(0xc0016008c0, {0x2f14d55?, 0x732c53?}, 0x38af9d8)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestStartStop(0xc0016008c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0016008c0, 0x38af7f8)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2224 [chan receive, 22 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x486
testing.tRunner(0xc00197c380, 0x38af9d8)
	/usr/local/go/src/testing/testing.go:1798 +0x104
created by testing.(*T).Run in goroutine 2074
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 866 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3c59620)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:311 +0x345
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 845
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:148 +0x245

goroutine 164 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3c59620)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:311 +0x345
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 163
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:148 +0x245

goroutine 150 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00098fa10, 0x3b)
	/usr/local/go/src/runtime/sema.go:597 +0x15d
sync.(*Cond).Wait(0xc001439d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3c5c4e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00098fa40)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:159 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000a0e008?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00051a000, {0x3c096c0, 0xc0014a6000}, 0x1, 0xc000078310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00051a000, 0x3b9aca00, 0x0, 0x1, 0xc000078310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 165
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:143 +0x1cf

goroutine 165 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00098fa40, 0xc000078310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 163
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cache.go:122 +0x569

goroutine 152 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 151
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 1316 [chan send, 142 minutes]:
os/exec.(*Cmd).watchCtx(0xc00075d200, 0xc0017e0e70)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 836
	/usr/local/go/src/os/exec/exec.go:775 +0x989

goroutine 2013 [chan receive, 30 minutes]:
testing.(*T).Run(0xc0015f01c0, {0x2f14d55?, 0xc00140ff60?}, 0xc00157a180)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0015f01c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc0015f01c0, 0x38af7b0)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2321 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc0007efe00, 0xc0017f08c0)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2318
	/usr/local/go/src/os/exec/exec.go:775 +0x989

goroutine 2140 [chan receive, 30 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x486
testing.tRunner(0xc0014f3a40, 0xc00157a180)
	/usr/local/go/src/testing/testing.go:1798 +0x104
created by testing.(*T).Run in goroutine 2013
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 867 [chan receive, 150 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007f3800, 0xc000078310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 845
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cache.go:122 +0x569

goroutine 824 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 823
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 823 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c48810, 0xc000078310}, 0xc0017aff50, 0xc0017aff98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3c48810, 0xc000078310}, 0x90?, 0xc0017aff50, 0xc0017aff98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c48810?, 0xc000078310?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0017affd0?, 0x76cc04?, 0xc000108d20?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 867
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:145 +0x27a

goroutine 701 [IO wait, 161 minutes]:
internal/poll.runtime_pollWait(0x198f5373f10, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0x69cbb3?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc001994020, 0xc00159dba0)
	/usr/local/go/src/internal/poll/fd_windows.go:177 +0x105
internal/poll.(*FD).acceptOne(0xc001994008, 0x430, {0xc000b6a2d0?, 0xc00159dc00?, 0x6a72e5?}, 0xc00159dc34?)
	/usr/local/go/src/internal/poll/fd_windows.go:946 +0x65
internal/poll.(*FD).Accept(0xc001994008, 0xc00159dd80)
	/usr/local/go/src/internal/poll/fd_windows.go:980 +0x1b6
net.(*netFD).accept(0xc001994008)
	/usr/local/go/src/net/fd_windows.go:182 +0x4b
net.(*TCPListener).accept(0xc0007f2f80)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1b
net.(*TCPListener).Accept(0xc0007f2f80)
	/usr/local/go/src/net/tcpsock.go:380 +0x30
net/http.(*Server).Serve(0xc000698500, {0x3c371c0, 0xc0007f2f80})
	/usr/local/go/src/net/http/server.go:3424 +0x30c
net/http.(*Server).ListenAndServe(0xc000698500)
	/usr/local/go/src/net/http/server.go:3350 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2230
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 698
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2229 +0x129

goroutine 2318 [syscall, 6 minutes]:
syscall.Syscall(0xc0014cfd00?, 0x0?, 0x73043b?, 0x1000000000000?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x580, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc0007efe00?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0007efe00)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc0007efe00)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc000507dc0, 0xc0007efe00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc000507dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x4b
testing.tRunner(0xc000507dc0, 0xc00151a6c0)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2141
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2291 [chan receive, 22 minutes]:
testing.(*testState).waitParallel(0xc00002a1e0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00197c8c0)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00197c8c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00197c8c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc00197c8c0, 0xc000810580)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2224
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2225 [chan receive, 22 minutes]:
testing.(*testState).waitParallel(0xc00002a1e0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00197c540)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00197c540)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00197c540)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc00197c540, 0xc000810440)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2224
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2290 [chan receive, 22 minutes]:
testing.(*testState).waitParallel(0xc00002a1e0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00197c700)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00197c700)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00197c700)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc00197c700, 0xc000810480)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2224
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2142 [chan receive, 30 minutes]:
testing.(*testState).waitParallel(0xc00002a1e0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc000606a80)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000606a80)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000606a80)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000606a80, 0xc0005ba680)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2140
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2141 [chan receive, 6 minutes]:
testing.(*T).Run(0xc000606700, {0x2f14d5a?, 0x3bff108?}, 0xc00151a6c0)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000606700)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x5bb
testing.tRunner(0xc000606700, 0xc0005ba580)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2140
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2143 [chan receive, 30 minutes]:
testing.(*testState).waitParallel(0xc00002a1e0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc000606e00)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000606e00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000606e00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000606e00, 0xc0005ba700)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2140
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2144 [chan receive, 30 minutes]:
testing.(*testState).waitParallel(0xc00002a1e0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc000607880)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000607880)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000607880)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000607880, 0xc0005ba780)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2140
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2145 [chan receive, 30 minutes]:
testing.(*testState).waitParallel(0xc00002a1e0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc001a06380)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc001a06380)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001a06380)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc001a06380, 0xc0005ba800)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2140
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2146 [chan receive, 4 minutes]:
testing.(*T).Run(0xc001a06e00, {0x2f14d5a?, 0x3bff108?}, 0xc00151a090)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001a06e00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x5bb
testing.tRunner(0xc001a06e00, 0xc0005ba880)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2140
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2147 [chan receive, 30 minutes]:
testing.(*testState).waitParallel(0xc00002a1e0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc001600540)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc001600540)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001600540)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc001600540, 0xc0005ba900)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2140
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2148 [chan receive, 2 minutes]:
testing.(*T).Run(0xc001600a80, {0x2f14d5a?, 0x3bff108?}, 0xc00168c180)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001600a80)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x5bb
testing.tRunner(0xc001600a80, 0xc0005ba980)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2140
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2149 [chan receive, 2 minutes]:
testing.(*T).Run(0xc001601c00, {0x2f14d5a?, 0x3bff108?}, 0xc00159be90)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001601c00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x5bb
testing.tRunner(0xc001601c00, 0xc0005baa00)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2140
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2320 [syscall]:
syscall.Syscall6(0x198af870a38?, 0x10000?, 0x4000?, 0xc001848008?, 0xc00152a000?, 0xc001a0fbf0?, 0x648665?, 0x6f69746172756769?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x6d4, {0xc00153269e?, 0x7962, 0x69df1f?}, 0x10000?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc000836908?, {0xc00153269e?, 0x0?, 0x3130?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc000836908, {0xc00153269e, 0x7962, 0x7962})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00010cb88, {0xc00153269e?, 0xd1f?, 0xd1f?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc00151aa50, {0x3c07c00, 0xc00081c0c0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c07d80, 0xc00151aa50}, {0x3c07c00, 0xc00081c0c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c07d80, 0xc00151aa50})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc001a0ff38?, {0x3c07d80?, 0xc00151aa50?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3c07d80, 0xc00151aa50}, {0x3c07ce0, 0xc00010cb88}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc00162c0e0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2318
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2319 [syscall, 2 minutes]:
syscall.Syscall6(0x198f4fe2588?, 0x198af870a38?, 0x800?, 0xc000580008?, 0xc0014e1800?, 0xc001927bf0?, 0x648659?, 0x628d30?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x6e8, {0xc0014e1a25?, 0x5db, 0x69df1f?}, 0x800?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc000551b08?, {0xc0014e1a25?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc000551b08, {0xc0014e1a25, 0x5db, 0x5db})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00010cb68, {0xc0014e1a25?, 0x5e6d3f?, 0x29f5420?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc00151aa20, {0x3c07c00, 0xc000940100})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c07d80, 0xc00151aa20}, {0x3c07c00, 0xc000940100}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c07d80, 0xc00151aa20})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc001927f38?, {0x3c07d80?, 0xc00151aa20?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3c07d80, 0xc00151aa20}, {0x3c07ce0, 0xc00010cb68}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0017f0310?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2318
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2292 [chan receive, 22 minutes]:
testing.(*testState).waitParallel(0xc00002a1e0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00197ca80)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00197ca80)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00197ca80)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc00197ca80, 0xc0008105c0)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2224
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2293 [chan receive, 22 minutes]:
testing.(*testState).waitParallel(0xc00002a1e0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00197ce00)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00197ce00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00197ce00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc00197ce00, 0xc000810640)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2224
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2294 [chan receive, 22 minutes]:
testing.(*testState).waitParallel(0xc00002a1e0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00197d180)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00197d180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00197d180)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc00197d180, 0xc000810840)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2224
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2358 [syscall, 4 minutes]:
syscall.Syscall(0xc00154bd00?, 0x0?, 0x73043b?, 0x1000000000000?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x6b8, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc00021d200?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc00021d200)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc00021d200)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc0008f6000, 0xc00021d200)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0008f6000)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x4b
testing.tRunner(0xc0008f6000, 0xc00151a090)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2146
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2359 [syscall, 4 minutes]:
syscall.Syscall6(0x198f53a7410?, 0x198af870a38?, 0x400?, 0xc000580808?, 0xc0008f8400?, 0xc0013fdbf0?, 0x648659?, 0xc0014adc30?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x524, {0xc0008f85ef?, 0x211, 0x69df1f?}, 0x400?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc0019fc248?, {0xc0008f85ef?, 0x0?, 0xc0014adc38?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc0019fc248, {0xc0008f85ef, 0x211, 0x211})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000940018, {0xc0008f85ef?, 0x5e6d3f?, 0x29f5420?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc00151a3f0, {0x3c07c00, 0xc00010c278})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c07d80, 0xc00151a3f0}, {0x3c07c00, 0xc00010c278}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001589140?, {0x3c07d80, 0xc00151a3f0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x2f12fb3?, {0x3c07d80?, 0xc00151a3f0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3c07d80, 0xc00151a3f0}, {0x3c07ce0, 0xc000940018}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc000810980?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2358
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2360 [syscall, 4 minutes]:
syscall.Syscall6(0x198f4f12ed8?, 0x198af870a38?, 0x2000?, 0xc000600008?, 0xc000958000?, 0xc001547bf0?, 0x648659?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x46c, {0xc000959b71?, 0x48f, 0x69df1f?}, 0x2000?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc0019fc6c8?, {0xc000959b71?, 0x0?, 0x9ec?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc0019fc6c8, {0xc000959b71, 0x48f, 0x48f})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000940038, {0xc000959b71?, 0x5e6d3f?, 0x29f5420?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc00151a420, {0x3c07c00, 0xc0000c6088})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c07d80, 0xc00151a420}, {0x3c07c00, 0xc0000c6088}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c07d80, 0xc00151a420})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x0?, {0x3c07d80?, 0xc00151a420?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3c07d80, 0xc00151a420}, {0x3c07ce0, 0xc000940038}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2358
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2361 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc00021d200, 0xc0017f02a0)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2358
	/usr/local/go/src/os/exec/exec.go:775 +0x989

goroutine 2368 [syscall, 2 minutes]:
syscall.Syscall(0xc001509d00?, 0x198f52f0828?, 0x0?, 0x10000056cc5f8?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x3bc, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc000972900?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000972900)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc000972900)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc0008f7500, 0xc000972900)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0008f7500)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x4b
testing.tRunner(0xc0008f7500, 0xc00159be90)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2149
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 2369 [syscall, 2 minutes]:
syscall.Syscall6(0x198f5333708?, 0x198af870a38?, 0x400?, 0xc000680008?, 0xc0008f3000?, 0xc00150bbf0?, 0x648659?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x404, {0xc0008f31ec?, 0x214, 0x69df1f?}, 0x400?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc00146a488?, {0xc0008f31ec?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc00146a488, {0xc0008f31ec, 0x214, 0x214})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000c6208, {0xc0008f31ec?, 0x5e6d3f?, 0x29f5420?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc00168c0f0, {0x3c07c00, 0xc000940140})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c07d80, 0xc00168c0f0}, {0x3c07c00, 0xc000940140}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c07d80, 0xc00168c0f0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x0?, {0x3c07d80?, 0xc00168c0f0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3c07d80, 0xc00168c0f0}, {0x3c07ce0, 0xc0000c6208}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2368
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 2418 [syscall, 2 minutes]:
syscall.Syscall6(0x198f4fa2088?, 0x198af8705a0?, 0x2000?, 0xc00088f808?, 0xc0013ce000?, 0xc001505bf0?, 0x648659?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x410, {0xc0013cfb65?, 0x49b, 0x69df1f?}, 0x2000?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc00146a908?, {0xc0013cfb65?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc00146a908, {0xc0013cfb65, 0x49b, 0x49b})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000c6220, {0xc0013cfb65?, 0x5e6d3f?, 0x29f5420?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc00168c120, {0x3c07c00, 0xc00010cbc8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c07d80, 0xc00168c120}, {0x3c07c00, 0xc00010cbc8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c07d80, 0xc00168c120})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x0?, {0x3c07d80?, 0xc00168c120?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3c07d80, 0xc00168c120}, {0x3c07ce0, 0xc0000c6220}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2368
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

                                                
                                                
goroutine 2419 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc000972900, 0xc001b975e0)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2368
	/usr/local/go/src/os/exec/exec.go:775 +0x989

                                                
                                                
goroutine 2420 [syscall, 2 minutes]:
syscall.Syscall(0xc001511d00?, 0x0?, 0x73043b?, 0x1000000000000?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x35c, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc000972d80?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000972d80)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc000972d80)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc0008f7dc0, 0xc000972d80)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0008f7dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x4b
testing.tRunner(0xc0008f7dc0, 0xc00168c180)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2148
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2421 [syscall]:
syscall.Syscall6(0x198f4ece6a0?, 0x198af8705a0?, 0x800?, 0xc001796808?, 0xc0005e3000?, 0xc001513bf0?, 0x648659?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x354, {0xc0005e3204?, 0x5fc, 0x69df1f?}, 0x800?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc00146afc8?, {0xc0005e3204?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc00146afc8, {0xc0005e3204, 0x5fc, 0x5fc})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000c6250, {0xc0005e3204?, 0x5e6d3f?, 0x29f5420?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc00168c2a0, {0x3c07c00, 0xc00010cc30})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c07d80, 0xc00168c2a0}, {0x3c07c00, 0xc00010cc30}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c07d80, 0xc00168c2a0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x0?, {0x3c07d80?, 0xc00168c2a0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3c07d80, 0xc00168c2a0}, {0x3c07ce0, 0xc0000c6250}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2420
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

                                                
                                                
goroutine 2422 [syscall]:
syscall.Syscall6(0x198f4fe01e8?, 0x198af870a38?, 0x2000?, 0xc0000d8808?, 0xc000026000?, 0xc00150dbf0?, 0x648659?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x450, {0xc000027bda?, 0x426, 0x69df1f?}, 0x2000?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc00146b448?, {0xc000027bda?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc00146b448, {0xc000027bda, 0x426, 0x426})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0000c6270, {0xc000027bda?, 0x5e6d3f?, 0x29f5420?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc00168c2d0, {0x3c07c00, 0xc000940160})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c07d80, 0xc00168c2d0}, {0x3c07c00, 0xc000940160}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c07d80, 0xc00168c2d0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x0?, {0x3c07d80?, 0xc00168c2d0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3c07d80, 0xc00168c2d0}, {0x3c07ce0, 0xc0000c6270}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2420
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

                                                
                                                
goroutine 2423 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc000972d80, 0xc001b97880)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2420
	/usr/local/go/src/os/exec/exec.go:775 +0x989

                                                
                                    

Test pass (165/211)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 15.12
4 TestDownloadOnly/v1.20.0/preload-exists 0.1
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.37
9 TestDownloadOnly/v1.20.0/DeleteAll 0.91
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.67
12 TestDownloadOnly/v1.32.2/json-events 10.88
13 TestDownloadOnly/v1.32.2/preload-exists 0
16 TestDownloadOnly/v1.32.2/kubectl 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.58
18 TestDownloadOnly/v1.32.2/DeleteAll 0.68
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.91
21 TestBinaryMirror 7.2
22 TestOffline 425.66
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.4
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.4
27 TestAddons/Setup 439.39
29 TestAddons/serial/Volcano 70.13
31 TestAddons/serial/GCPAuth/Namespaces 0.35
32 TestAddons/serial/GCPAuth/FakeCredentials 11.61
35 TestAddons/parallel/Registry 36.29
36 TestAddons/parallel/Ingress 67.06
37 TestAddons/parallel/InspektorGadget 27.22
38 TestAddons/parallel/MetricsServer 23.34
40 TestAddons/parallel/CSI 82.58
41 TestAddons/parallel/Headlamp 59.4
42 TestAddons/parallel/CloudSpanner 21.95
43 TestAddons/parallel/LocalPath 87.13
44 TestAddons/parallel/NvidiaDevicePlugin 14.34
45 TestAddons/parallel/Yakd 27.05
47 TestAddons/StoppedEnableDisable 54.87
48 TestCertOptions 319.87
49 TestCertExpiration 942.23
50 TestDockerFlags 416.14
51 TestForceSystemdFlag 253.95
52 TestForceSystemdEnv 427.45
59 TestErrorSpam/start 17.1
60 TestErrorSpam/status 37.18
61 TestErrorSpam/pause 23.74
62 TestErrorSpam/unpause 23.88
63 TestErrorSpam/stop 62.15
66 TestFunctional/serial/CopySyncFile 0.04
67 TestFunctional/serial/StartWithProxy 226.59
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 128.98
70 TestFunctional/serial/KubeContext 0.14
71 TestFunctional/serial/KubectlGetPods 0.23
74 TestFunctional/serial/CacheCmd/cache/add_remote 26.6
75 TestFunctional/serial/CacheCmd/cache/add_local 10.93
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.27
77 TestFunctional/serial/CacheCmd/cache/list 0.26
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.41
79 TestFunctional/serial/CacheCmd/cache/cache_reload 36.58
80 TestFunctional/serial/CacheCmd/cache/delete 0.53
81 TestFunctional/serial/MinikubeKubectlCmd 0.6
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.42
83 TestFunctional/serial/ExtraConfig 124.28
84 TestFunctional/serial/ComponentHealth 0.19
85 TestFunctional/serial/LogsCmd 8.78
86 TestFunctional/serial/LogsFileCmd 10.71
87 TestFunctional/serial/InvalidService 21.49
89 TestFunctional/parallel/ConfigCmd 1.7
93 TestFunctional/parallel/StatusCmd 44.23
97 TestFunctional/parallel/ServiceCmdConnect 37.11
98 TestFunctional/parallel/AddonsCmd 0.77
99 TestFunctional/parallel/PersistentVolumeClaim 41.42
101 TestFunctional/parallel/SSHCmd 22.87
102 TestFunctional/parallel/CpCmd 63.01
103 TestFunctional/parallel/MySQL 72.03
104 TestFunctional/parallel/FileSync 11.15
105 TestFunctional/parallel/CertSync 65.89
109 TestFunctional/parallel/NodeLabels 0.23
111 TestFunctional/parallel/NonActiveRuntimeDisabled 11.45
113 TestFunctional/parallel/License 1.82
114 TestFunctional/parallel/ImageCommands/ImageListShort 8.35
115 TestFunctional/parallel/ImageCommands/ImageListTable 7.99
116 TestFunctional/parallel/ImageCommands/ImageListJson 8
117 TestFunctional/parallel/ImageCommands/ImageListYaml 8.35
118 TestFunctional/parallel/ImageCommands/ImageBuild 29.18
119 TestFunctional/parallel/ImageCommands/Setup 2.27
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 19.17
121 TestFunctional/parallel/DockerEnv/powershell 46.73
122 TestFunctional/parallel/UpdateContextCmd/no_changes 2.93
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.97
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.91
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 17.69
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 18.73
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.51
128 TestFunctional/parallel/ServiceCmd/DeployApp 7.47
129 TestFunctional/parallel/ServiceCmd/List 14.26
130 TestFunctional/parallel/ImageCommands/ImageRemove 17.57
131 TestFunctional/parallel/ProfileCmd/profile_not_create 15.18
132 TestFunctional/parallel/ServiceCmd/JSONOutput 14.71
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 17.95
134 TestFunctional/parallel/ProfileCmd/profile_list 14.88
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 9.43
137 TestFunctional/parallel/ProfileCmd/profile_json_output 14.88
140 TestFunctional/parallel/Version/short 0.28
141 TestFunctional/parallel/Version/components 8.16
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.66
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.68
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
153 TestFunctional/delete_echo-server_images 0.19
154 TestFunctional/delete_my-image_image 0.09
155 TestFunctional/delete_minikube_cached_images 0.09
160 TestMultiControlPlane/serial/StartCluster 718.51
161 TestMultiControlPlane/serial/DeployApp 13.61
163 TestMultiControlPlane/serial/AddWorkerNode 268.72
164 TestMultiControlPlane/serial/NodeLabels 0.2
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 49.8
166 TestMultiControlPlane/serial/CopyFile 650.53
170 TestImageBuild/serial/Setup 197.69
171 TestImageBuild/serial/NormalBuild 10.77
172 TestImageBuild/serial/BuildWithBuildArg 8.92
173 TestImageBuild/serial/BuildWithDockerIgnore 8.27
174 TestImageBuild/serial/BuildWithSpecifiedDockerfile 8.37
178 TestJSONOutput/start/Command 204.44
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 7.95
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 7.95
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 35.11
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.95
206 TestMainNoArgs 0.24
207 TestMinikubeProfile 536.88
210 TestMountStart/serial/StartWithMountFirst 158.16
211 TestMountStart/serial/VerifyMountFirst 9.78
212 TestMountStart/serial/StartWithMountSecond 158.44
213 TestMountStart/serial/VerifyMountSecond 9.78
214 TestMountStart/serial/DeleteFirst 31.28
215 TestMountStart/serial/VerifyMountPostDelete 9.6
216 TestMountStart/serial/Stop 26.79
217 TestMountStart/serial/RestartStopped 119.35
218 TestMountStart/serial/VerifyMountPostStop 9.48
221 TestMultiNode/serial/FreshStart2Nodes 446.24
222 TestMultiNode/serial/DeployApp2Nodes 9.54
224 TestMultiNode/serial/AddNode 243.82
225 TestMultiNode/serial/MultiNodeLabels 0.19
226 TestMultiNode/serial/ProfileList 35.94
227 TestMultiNode/serial/CopyFile 365.76
228 TestMultiNode/serial/StopNode 76.96
229 TestMultiNode/serial/StartAfterStop 196.83
236 TestPreload 521.68
237 TestScheduledStopWindows 335.07
242 TestRunningBinaryUpgrade 1095.57
244 TestKubernetesUpgrade 1372.14
247 TestNoKubernetes/serial/StartNoK8sWithVersion 0.39
260 TestStoppedBinaryUpgrade/Setup 0.8
261 TestStoppedBinaryUpgrade/Upgrade 935.98
270 TestPause/serial/Start 485.74
271 TestPause/serial/SecondStartNoReconfiguration 386.1
272 TestStoppedBinaryUpgrade/MinikubeLogs 10.33
273 TestPause/serial/Pause 8.59
274 TestPause/serial/VerifyStatus 13.59
x
+
TestDownloadOnly/v1.20.0/json-events (15.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-942200 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-942200 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (15.1227402s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (15.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0317 10:25:54.281266    8940 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0317 10:25:54.376172    8940 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-942200
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-942200: exit status 85 (372.5038ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-942200 | minikube6\jenkins | v1.35.0 | 17 Mar 25 10:25 UTC |          |
	|         | -p download-only-942200        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 10:25:39
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 10:25:39.269311     956 out.go:345] Setting OutFile to fd 716 ...
	I0317 10:25:39.342290     956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:25:39.342290     956 out.go:358] Setting ErrFile to fd 720...
	I0317 10:25:39.342290     956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0317 10:25:39.355486     956 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0317 10:25:39.363906     956 out.go:352] Setting JSON to true
	I0317 10:25:39.370179     956 start.go:129] hostinfo: {"hostname":"minikube6","uptime":916,"bootTime":1742206223,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5608 Build 19045.5608","kernelVersion":"10.0.19045.5608 Build 19045.5608","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0317 10:25:39.370179     956 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0317 10:25:39.378152     956 out.go:97] [download-only-942200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608
	I0317 10:25:39.378152     956 notify.go:220] Checking for updates...
	W0317 10:25:39.378152     956 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0317 10:25:39.381042     956 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 10:25:39.384011     956 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0317 10:25:39.386999     956 out.go:169] MINIKUBE_LOCATION=20535
	I0317 10:25:39.389746     956 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0317 10:25:39.394410     956 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0317 10:25:39.395398     956 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 10:25:45.023789     956 out.go:97] Using the hyperv driver based on user configuration
	I0317 10:25:45.023789     956 start.go:297] selected driver: hyperv
	I0317 10:25:45.024325     956 start.go:901] validating driver "hyperv" against <nil>
	I0317 10:25:45.024540     956 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 10:25:45.082101     956 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0317 10:25:45.084267     956 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0317 10:25:45.084267     956 cni.go:84] Creating CNI manager for ""
	I0317 10:25:45.084746     956 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0317 10:25:45.084970     956 start.go:340] cluster config:
	{Name:download-only-942200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-942200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 10:25:45.086248     956 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 10:25:45.090304     956 out.go:97] Downloading VM boot image ...
	I0317 10:25:45.090304     956 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.35.0-amd64.iso
	I0317 10:25:48.797648     956 out.go:97] Starting "download-only-942200" primary control-plane node in "download-only-942200" cluster
	I0317 10:25:48.797648     956 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0317 10:25:48.839268     956 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0317 10:25:48.839268     956 cache.go:56] Caching tarball of preloaded images
	I0317 10:25:48.840691     956 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0317 10:25:48.843583     956 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0317 10:25:48.843656     956 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0317 10:25:48.912254     956 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-942200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-942200"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.37s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.91s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.91s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-942200
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.67s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/json-events (10.88s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-270900 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-270900 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=hyperv: (10.8762486s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (10.88s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0317 10:26:07.213653    8940 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0317 10:26:07.214370    8940 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/kubectl
--- PASS: TestDownloadOnly/v1.32.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/LogsDuration (0.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-270900
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-270900: exit status 85 (578.4057ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-942200 | minikube6\jenkins | v1.35.0 | 17 Mar 25 10:25 UTC |                     |
	|         | -p download-only-942200        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.35.0 | 17 Mar 25 10:25 UTC | 17 Mar 25 10:25 UTC |
	| delete  | -p download-only-942200        | download-only-942200 | minikube6\jenkins | v1.35.0 | 17 Mar 25 10:25 UTC | 17 Mar 25 10:25 UTC |
	| start   | -o=json --download-only        | download-only-270900 | minikube6\jenkins | v1.35.0 | 17 Mar 25 10:25 UTC |                     |
	|         | -p download-only-270900        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 10:25:56
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 10:25:56.440787    3568 out.go:345] Setting OutFile to fd 884 ...
	I0317 10:25:56.507534    3568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:25:56.507534    3568 out.go:358] Setting ErrFile to fd 888...
	I0317 10:25:56.507534    3568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:25:56.527826    3568 out.go:352] Setting JSON to true
	I0317 10:25:56.530801    3568 start.go:129] hostinfo: {"hostname":"minikube6","uptime":933,"bootTime":1742206223,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5608 Build 19045.5608","kernelVersion":"10.0.19045.5608 Build 19045.5608","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0317 10:25:56.530801    3568 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0317 10:25:56.738861    3568 out.go:97] [download-only-270900] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608
	I0317 10:25:56.739585    3568 notify.go:220] Checking for updates...
	I0317 10:25:56.742216    3568 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 10:25:56.745314    3568 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0317 10:25:56.749193    3568 out.go:169] MINIKUBE_LOCATION=20535
	I0317 10:25:56.752494    3568 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0317 10:25:56.758540    3568 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0317 10:25:56.759493    3568 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 10:26:02.231151    3568 out.go:97] Using the hyperv driver based on user configuration
	I0317 10:26:02.231151    3568 start.go:297] selected driver: hyperv
	I0317 10:26:02.231151    3568 start.go:901] validating driver "hyperv" against <nil>
	I0317 10:26:02.232047    3568 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 10:26:02.285767    3568 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0317 10:26:02.286505    3568 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0317 10:26:02.286505    3568 cni.go:84] Creating CNI manager for ""
	I0317 10:26:02.287110    3568 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0317 10:26:02.287110    3568 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0317 10:26:02.287184    3568 start.go:340] cluster config:
	{Name:download-only-270900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-270900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 10:26:02.287184    3568 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 10:26:02.293389    3568 out.go:97] Starting "download-only-270900" primary control-plane node in "download-only-270900" cluster
	I0317 10:26:02.293389    3568 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 10:26:02.335023    3568 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0317 10:26:02.335086    3568 cache.go:56] Caching tarball of preloaded images
	I0317 10:26:02.335665    3568 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 10:26:02.340243    3568 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0317 10:26:02.340243    3568 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 ...
	I0317 10:26:02.404901    3568 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4?checksum=md5:c3fdd273d8c9002513e1c87be8fe9ffc -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0317 10:26:05.196052    3568 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 ...
	I0317 10:26:05.196530    3568 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 ...
	I0317 10:26:06.038009    3568 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0317 10:26:06.038981    3568 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-270900\config.json ...
	I0317 10:26:06.039851    3568 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-270900\config.json: {Name:mk667f8b7cc391056a3eae91577bd31edc7bb581 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 10:26:06.041142    3568 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0317 10:26:06.041440    3568 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.32.2/kubectl.exe
	
	
	* The control-plane node download-only-270900 host does not exist
	  To start a cluster, run: "minikube start -p download-only-270900"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.58s)
TestDownloadOnly/v1.32.2/DeleteAll (0.68s)
=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.68s)
TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.91s)
=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-270900
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.91s)
TestBinaryMirror (7.2s)
=== RUN   TestBinaryMirror
I0317 10:26:10.822771    8940 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-631100 --alsologtostderr --binary-mirror http://127.0.0.1:52705 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-631100 --alsologtostderr --binary-mirror http://127.0.0.1:52705 --driver=hyperv: (6.5140673s)
helpers_test.go:175: Cleaning up "binary-mirror-631100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-631100
--- PASS: TestBinaryMirror (7.20s)
TestOffline (425.66s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-183300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-183300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (6m24.4721101s)
helpers_test.go:175: Cleaning up "offline-docker-183300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-183300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-183300: (41.1873877s)
--- PASS: TestOffline (425.66s)
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.4s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-331000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-331000: exit status 85 (403.4075ms)
-- stdout --
	* Profile "addons-331000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-331000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.40s)
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.4s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-331000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-331000: exit status 85 (398.2798ms)
-- stdout --
	* Profile "addons-331000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-331000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.40s)
TestAddons/Setup (439.39s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-331000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-331000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (7m19.3851632s)
--- PASS: TestAddons/Setup (439.39s)
TestAddons/serial/Volcano (70.13s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:815: volcano-admission stabilized in 20.3073ms
addons_test.go:823: volcano-controller stabilized in 21.4109ms
addons_test.go:807: volcano-scheduler stabilized in 21.4777ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-4f4lw" [96314393-967e-48f4-90ac-df27465a375a] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0069393s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-bvgxp" [960eb287-7839-4401-82a5-a24d31cb6418] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0071964s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-j6wqz" [e045bed5-ad8b-4d12-abb8-a3dbb99af43b] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0067164s
addons_test.go:842: (dbg) Run:  kubectl --context addons-331000 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-331000 create -f testdata\vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-331000 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [753bebf4-339a-48d0-a58f-3a7a0778f4a4] Pending
helpers_test.go:344: "test-job-nginx-0" [753bebf4-339a-48d0-a58f-3a7a0778f4a4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [753bebf4-339a-48d0-a58f-3a7a0778f4a4] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 26.2712633s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-331000 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-331000 addons disable volcano --alsologtostderr -v=1: (26.855954s)
--- PASS: TestAddons/serial/Volcano (70.13s)
TestAddons/serial/GCPAuth/Namespaces (0.35s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-331000 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-331000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.35s)
TestAddons/serial/GCPAuth/FakeCredentials (11.61s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-331000 create -f testdata\busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-331000 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [710eaceb-55e3-4887-9bf0-38b194962801] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [710eaceb-55e3-4887-9bf0-38b194962801] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.0064431s
addons_test.go:633: (dbg) Run:  kubectl --context addons-331000 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-331000 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-331000 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-331000 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.61s)
TestAddons/parallel/Registry (36.29s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 16.9669ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-2pqf2" [38321876-6e3c-4f2b-8ba9-eecf668c0f35] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0055061s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tkm6f" [4fa74ce3-184d-491a-9a80-c49fd4b730c4] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0075003s
addons_test.go:331: (dbg) Run:  kubectl --context addons-331000 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-331000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-331000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.3940932s)
addons_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-331000 ip
addons_test.go:350: (dbg) Done: out/minikube-windows-amd64.exe -p addons-331000 ip: (2.6434388s)
2025/03/17 10:35:49 [DEBUG] GET http://172.25.16.49:5000
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-331000 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-331000 addons disable registry --alsologtostderr -v=1: (16.9801639s)
--- PASS: TestAddons/parallel/Registry (36.29s)
TestAddons/parallel/Ingress (67.06s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-331000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-331000 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-331000 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c1dadcb7-23e1-4cf7-81bc-b895c7beb155] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c1dadcb7-23e1-4cf7-81bc-b895c7beb155] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.0047163s
I0317 10:37:11.748871    8940 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-331000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-331000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.9513259s)
addons_test.go:286: (dbg) Run:  kubectl --context addons-331000 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-331000 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-331000 ip: (2.5326733s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.25.16.49
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-331000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-331000 addons disable ingress-dns --alsologtostderr -v=1: (15.9755976s)
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-331000 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-331000 addons disable ingress --alsologtostderr -v=1: (22.1781937s)
--- PASS: TestAddons/parallel/Ingress (67.06s)
TestAddons/parallel/InspektorGadget (27.22s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-b4m2d" [ffa2933b-691a-4244-8fc8-6041843893a7] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0117779s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-331000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-331000 addons disable inspektor-gadget --alsologtostderr -v=1: (21.201745s)
--- PASS: TestAddons/parallel/InspektorGadget (27.22s)
TestAddons/parallel/MetricsServer (23.34s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 7.7181ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-r8gmv" [1f9632d9-7246-42c8-bd9f-f092aa63e9fe] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0072032s
addons_test.go:402: (dbg) Run:  kubectl --context addons-331000 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-331000 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-331000 addons disable metrics-server --alsologtostderr -v=1: (17.0659016s)
--- PASS: TestAddons/parallel/MetricsServer (23.34s)
TestAddons/parallel/CSI (82.58s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0317 10:36:06.435203    8940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0317 10:36:06.444093    8940 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0317 10:36:06.444093    8940 kapi.go:107] duration metric: took 8.97ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.97ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-331000 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-331000 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [aedd1b63-531e-44a5-b10c-79debe364666] Pending
helpers_test.go:344: "task-pv-pod" [aedd1b63-531e-44a5-b10c-79debe364666] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [aedd1b63-531e-44a5-b10c-79debe364666] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.0069254s
addons_test.go:511: (dbg) Run:  kubectl --context addons-331000 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-331000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-331000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-331000 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-331000 delete pod task-pv-pod: (2.220116s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-331000 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-331000 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-331000 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d599bc8a-ee1a-459f-8a0b-b32b3e93a1f7] Pending
helpers_test.go:344: "task-pv-pod-restore" [d599bc8a-ee1a-459f-8a0b-b32b3e93a1f7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d599bc8a-ee1a-459f-8a0b-b32b3e93a1f7] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0073748s
addons_test.go:553: (dbg) Run:  kubectl --context addons-331000 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-331000 delete pod task-pv-pod-restore: (1.6644241s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-331000 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-331000 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-331000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-331000 addons disable volumesnapshots --alsologtostderr -v=1: (16.1367494s)
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-331000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-331000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (22.1774912s)
--- PASS: TestAddons/parallel/CSI (82.58s)

TestAddons/parallel/Headlamp (59.4s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-331000 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-331000 --alsologtostderr -v=1: (16.9986192s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-gbll9" [81a94beb-5728-4c68-94ac-551933bf2b3b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-gbll9" [81a94beb-5728-4c68-94ac-551933bf2b3b] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 21.017121s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-331000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-331000 addons disable headlamp --alsologtostderr -v=1: (21.3837065s)
--- PASS: TestAddons/parallel/Headlamp (59.40s)

TestAddons/parallel/CloudSpanner (21.95s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-754dc876cd-mbctj" [03b591dc-80b9-40d7-8672-d752511f42bf] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0102723s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-331000 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-331000 addons disable cloud-spanner --alsologtostderr -v=1: (15.9239398s)
--- PASS: TestAddons/parallel/CloudSpanner (21.95s)

TestAddons/parallel/LocalPath (87.13s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-331000 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-331000 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [85c0b028-7886-418a-bb6a-90750bd745a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [85c0b028-7886-418a-bb6a-90750bd745a5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [85c0b028-7886-418a-bb6a-90750bd745a5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.005664s
addons_test.go:906: (dbg) Run:  kubectl --context addons-331000 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-331000 ssh "cat /opt/local-path-provisioner/pvc-3483d0b5-65f6-4d8e-8206-6361d89f3461_default_test-pvc/file1"
addons_test.go:915: (dbg) Done: out/minikube-windows-amd64.exe -p addons-331000 ssh "cat /opt/local-path-provisioner/pvc-3483d0b5-65f6-4d8e-8206-6361d89f3461_default_test-pvc/file1": (10.5748656s)
addons_test.go:927: (dbg) Run:  kubectl --context addons-331000 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-331000 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-331000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-331000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m2.7543485s)
--- PASS: TestAddons/parallel/LocalPath (87.13s)

TestAddons/parallel/NvidiaDevicePlugin (14.34s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4cwl8" [c07d9641-0726-4711-a0f7-0fcb01c151bd] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0129653s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-331000 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-331000 addons disable nvidia-device-plugin --alsologtostderr -v=1: (8.3176637s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (14.34s)

TestAddons/parallel/Yakd (27.05s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-xkswf" [8dc97891-1432-4244-ab72-5e1ce51a7050] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0068365s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-331000 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-331000 addons disable yakd --alsologtostderr -v=1: (21.0350891s)
--- PASS: TestAddons/parallel/Yakd (27.05s)

TestAddons/StoppedEnableDisable (54.87s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-331000
addons_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-331000: (41.5853243s)
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-331000
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-331000: (5.071601s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-331000
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-331000: (4.8578099s)
addons_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-331000
addons_test.go:183: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-331000: (3.355154s)
--- PASS: TestAddons/StoppedEnableDisable (54.87s)

TestCertOptions (319.87s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-493200 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-493200 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (4m10.0336681s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-493200 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-493200 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.2204817s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-493200 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-493200 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-493200 -- "sudo cat /etc/kubernetes/admin.conf": (11.5701244s)
helpers_test.go:175: Cleaning up "cert-options-493200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-493200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-493200: (47.9086799s)
--- PASS: TestCertOptions (319.87s)

TestCertExpiration (942.23s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-735200 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-735200 --memory=2048 --cert-expiration=3m --driver=hyperv: (6m51.738789s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-735200 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-735200 --memory=2048 --cert-expiration=8760h --driver=hyperv: (5m7.9503831s)
helpers_test.go:175: Cleaning up "cert-expiration-735200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-735200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-735200: (42.5388987s)
--- PASS: TestCertExpiration (942.23s)

TestDockerFlags (416.14s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-664100 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
E0317 13:16:12.752558    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-664100 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (5m54.3182345s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-664100 ssh "sudo systemctl show docker --property=Environment --no-pager"
E0317 13:21:12.755572    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-664100 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.5484272s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-664100 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-664100 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.4771502s)
helpers_test.go:175: Cleaning up "docker-flags-664100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-664100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-664100: (40.7908631s)
--- PASS: TestDockerFlags (416.14s)

TestForceSystemdFlag (253.95s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-436900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-436900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (3m24.7320975s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-436900 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-436900 ssh "docker info --format {{.CgroupDriver}}": (10.2021642s)
helpers_test.go:175: Cleaning up "force-systemd-flag-436900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-436900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-436900: (39.0111114s)
--- PASS: TestForceSystemdFlag (253.95s)

TestForceSystemdEnv (427.45s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-265000 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
E0317 12:56:12.738959    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-265000 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (6m16.4658381s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-265000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-265000 ssh "docker info --format {{.CgroupDriver}}": (10.4508986s)
helpers_test.go:175: Cleaning up "force-systemd-env-265000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-265000
E0317 13:02:35.838218    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-265000: (40.5314695s)
--- PASS: TestForceSystemdEnv (427.45s)

TestErrorSpam/start (17.1s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 start --dry-run: (5.6463614s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 start --dry-run: (5.7628815s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 start --dry-run: (5.6845233s)
--- PASS: TestErrorSpam/start (17.10s)

TestErrorSpam/status (37.18s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 status: (12.7200137s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 status
E0317 10:43:37.814279    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 10:43:37.821118    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 10:43:37.833606    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 10:43:37.856090    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 10:43:37.898813    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 10:43:37.981359    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 10:43:38.143101    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 10:43:38.465113    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 10:43:39.107002    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 10:43:40.389497    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 10:43:42.951425    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 status: (12.1766829s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 status
E0317 10:43:48.074458    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 10:43:58.316991    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 status: (12.2698774s)
--- PASS: TestErrorSpam/status (37.18s)

TestErrorSpam/pause (23.74s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 pause: (8.1797471s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 pause: (7.8147533s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 pause
E0317 10:44:18.800089    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 pause: (7.7461883s)
--- PASS: TestErrorSpam/pause (23.74s)

TestErrorSpam/unpause (23.88s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 unpause: (8.0695333s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 unpause: (7.9473378s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 unpause: (7.8607411s)
--- PASS: TestErrorSpam/unpause (23.88s)

TestErrorSpam/stop (62.15s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 stop
E0317 10:44:59.762460    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 stop: (40.1541829s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 stop: (11.1827732s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-647700 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-647700 stop: (10.8066236s)
--- PASS: TestErrorSpam/stop (62.15s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8940\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (226.59s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-758100 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0317 10:46:21.685009    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 10:48:37.815985    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 10:49:05.528275    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-758100 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m46.5770121s)
--- PASS: TestFunctional/serial/StartWithProxy (226.59s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (128.98s)

=== RUN   TestFunctional/serial/SoftStart
I0317 10:49:52.273377    8940 config.go:182] Loaded profile config "functional-758100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-758100 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-758100 --alsologtostderr -v=8: (2m8.9731734s)
functional_test.go:680: soft start took 2m8.9757541s for "functional-758100" cluster.
I0317 10:52:01.249283    8940 config.go:182] Loaded profile config "functional-758100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (128.98s)

TestFunctional/serial/KubeContext (0.14s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.14s)

TestFunctional/serial/KubectlGetPods (0.23s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-758100 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

TestFunctional/serial/CacheCmd/cache/add_remote (26.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 cache add registry.k8s.io/pause:3.1: (9.1278331s)
functional_test.go:1066: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 cache add registry.k8s.io/pause:3.3: (8.6599862s)
functional_test.go:1066: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 cache add registry.k8s.io/pause:latest: (8.8097367s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (26.60s)

TestFunctional/serial/CacheCmd/cache/add_local (10.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-758100 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2882940111\001
functional_test.go:1094: (dbg) Done: docker build -t minikube-local-cache-test:functional-758100 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2882940111\001: (1.9432562s)
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 cache add minikube-local-cache-test:functional-758100
functional_test.go:1106: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 cache add minikube-local-cache-test:functional-758100: (8.5764646s)
functional_test.go:1111: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 cache delete minikube-local-cache-test:functional-758100
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-758100
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (10.93s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

TestFunctional/serial/CacheCmd/cache/list (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.26s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh sudo crictl images
functional_test.go:1141: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 ssh sudo crictl images: (9.4058675s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.41s)

TestFunctional/serial/CacheCmd/cache/cache_reload (36.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1164: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.4219963s)
functional_test.go:1170: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-758100 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.542835s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 cache reload: (8.2041559s)
functional_test.go:1180: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1180: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.409775s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (36.58s)

TestFunctional/serial/CacheCmd/cache/delete (0.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.53s)

TestFunctional/serial/MinikubeKubectlCmd (0.6s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 kubectl -- --context functional-758100 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.60s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.42s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out\kubectl.exe --context functional-758100 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.42s)

TestFunctional/serial/ExtraConfig (124.28s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-758100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0317 10:53:37.817584    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-758100 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m4.2789948s)
functional_test.go:778: restart took 2m4.2797265s for "functional-758100" cluster.
I0317 10:55:31.497103    8940 config.go:182] Loaded profile config "functional-758100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (124.28s)

TestFunctional/serial/ComponentHealth (0.19s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-758100 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.19s)

TestFunctional/serial/LogsCmd (8.78s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 logs
functional_test.go:1253: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 logs: (8.7811993s)
--- PASS: TestFunctional/serial/LogsCmd (8.78s)

TestFunctional/serial/LogsFileCmd (10.71s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3827964570\001\logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3827964570\001\logs.txt: (10.7010949s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.71s)

TestFunctional/serial/InvalidService (21.49s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-758100 apply -f testdata\invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-758100
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-758100: exit status 115 (17.0050586s)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://172.25.21.21:30273 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_service_8fb87d8e79e761d215f3221b4a4d8a6300edfb06_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-758100 delete -f testdata\invalidsvc.yaml
functional_test.go:2344: (dbg) Done: kubectl --context functional-758100 delete -f testdata\invalidsvc.yaml: (1.07451s)
--- PASS: TestFunctional/serial/InvalidService (21.49s)

TestFunctional/parallel/ConfigCmd (1.7s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-758100 config get cpus: exit status 14 (255.3839ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-758100 config get cpus: exit status 14 (252.5465ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.70s)

TestFunctional/parallel/StatusCmd (44.23s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 status
functional_test.go:871: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 status: (15.8834572s)
functional_test.go:877: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:877: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (13.8834584s)
functional_test.go:889: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 status -o json
functional_test.go:889: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 status -o json: (14.4652973s)
--- PASS: TestFunctional/parallel/StatusCmd (44.23s)

TestFunctional/parallel/ServiceCmdConnect (37.11s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-758100 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-758100 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-nvk87" [e55e4c0f-25cf-495c-ba53-8650cd9c4b51] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-nvk87" [e55e4c0f-25cf-495c-ba53-8650cd9c4b51] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 17.0059071s
functional_test.go:1666: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 service hello-node-connect --url
functional_test.go:1666: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 service hello-node-connect --url: (19.6236428s)
functional_test.go:1672: found endpoint for hello-node-connect: http://172.25.21.21:30812
functional_test.go:1692: http://172.25.21.21:30812: success! body:

Hostname: hello-node-connect-58f9cf68d8-nvk87

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.25.21.21:8080/

Request Headers:
	accept-encoding=gzip
	host=172.25.21.21:30812
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (37.11s)

TestFunctional/parallel/AddonsCmd (0.77s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.77s)

TestFunctional/parallel/PersistentVolumeClaim (41.42s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [432f4e7d-597f-4996-ac3f-ac798354b63e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0064275s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-758100 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-758100 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-758100 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-758100 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1841b148-cec5-4ee5-b70e-15fa4068114e] Pending
helpers_test.go:344: "sp-pod" [1841b148-cec5-4ee5-b70e-15fa4068114e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1841b148-cec5-4ee5-b70e-15fa4068114e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.0119668s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-758100 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-758100 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-758100 delete -f testdata/storage-provisioner/pod.yaml: (1.025019s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-758100 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [aece5312-f3d7-47ff-8846-c430c50bbe7b] Pending
helpers_test.go:344: "sp-pod" [aece5312-f3d7-47ff-8846-c430c50bbe7b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [aece5312-f3d7-47ff-8846-c430c50bbe7b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007768s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-758100 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.42s)

TestFunctional/parallel/SSHCmd (22.87s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh "echo hello"
functional_test.go:1742: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 ssh "echo hello": (11.0609103s)
functional_test.go:1759: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh "cat /etc/hostname"
functional_test.go:1759: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 ssh "cat /etc/hostname": (11.8039922s)
--- PASS: TestFunctional/parallel/SSHCmd (22.87s)

TestFunctional/parallel/CpCmd (63.01s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.9317322s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh -n functional-758100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 ssh -n functional-758100 "sudo cat /home/docker/cp-test.txt": (11.3353842s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 cp functional-758100:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd1086056701\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 cp functional-758100:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd1086056701\001\cp-test.txt: (11.5179731s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh -n functional-758100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 ssh -n functional-758100 "sudo cat /home/docker/cp-test.txt": (11.3794374s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.6284344s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh -n functional-758100 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 ssh -n functional-758100 "sudo cat /tmp/does/not/exist/cp-test.txt": (11.2107993s)
--- PASS: TestFunctional/parallel/CpCmd (63.01s)

TestFunctional/parallel/MySQL (72.03s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-758100 replace --force -f testdata\mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-8d4b5" [355de57d-24c0-426f-b4da-ce9c80849a0f] Pending
helpers_test.go:344: "mysql-58ccfd96bb-8d4b5" [355de57d-24c0-426f-b4da-ce9c80849a0f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-8d4b5" [355de57d-24c0-426f-b4da-ce9c80849a0f] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 52.0083732s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-758100 exec mysql-58ccfd96bb-8d4b5 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-758100 exec mysql-58ccfd96bb-8d4b5 -- mysql -ppassword -e "show databases;": exit status 1 (370.7539ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0317 10:58:56.981446    8940 retry.go:31] will retry after 1.169097775s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-758100 exec mysql-58ccfd96bb-8d4b5 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-758100 exec mysql-58ccfd96bb-8d4b5 -- mysql -ppassword -e "show databases;": exit status 1 (541.5044ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0317 10:58:58.701661    8940 retry.go:31] will retry after 2.10916679s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-758100 exec mysql-58ccfd96bb-8d4b5 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-758100 exec mysql-58ccfd96bb-8d4b5 -- mysql -ppassword -e "show databases;": exit status 1 (336.4065ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0317 10:59:01.159970    8940 retry.go:31] will retry after 2.168038539s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-758100 exec mysql-58ccfd96bb-8d4b5 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-758100 exec mysql-58ccfd96bb-8d4b5 -- mysql -ppassword -e "show databases;": exit status 1 (310.7733ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0317 10:59:03.650031    8940 retry.go:31] will retry after 5.061606638s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-758100 exec mysql-58ccfd96bb-8d4b5 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-758100 exec mysql-58ccfd96bb-8d4b5 -- mysql -ppassword -e "show databases;": exit status 1 (395.5618ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0317 10:59:09.119908    8940 retry.go:31] will retry after 6.572686263s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-758100 exec mysql-58ccfd96bb-8d4b5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (72.03s)

TestFunctional/parallel/FileSync (11.15s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/8940/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh "sudo cat /etc/test/nested/copy/8940/hosts"
functional_test.go:1948: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 ssh "sudo cat /etc/test/nested/copy/8940/hosts": (11.1475029s)
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (11.15s)

TestFunctional/parallel/CertSync (65.89s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/8940.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh "sudo cat /etc/ssl/certs/8940.pem"
functional_test.go:1990: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 ssh "sudo cat /etc/ssl/certs/8940.pem": (11.361002s)
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/8940.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh "sudo cat /usr/share/ca-certificates/8940.pem"
functional_test.go:1990: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 ssh "sudo cat /usr/share/ca-certificates/8940.pem": (11.0071943s)
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1990: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 ssh "sudo cat /etc/ssl/certs/51391683.0": (11.1603801s)
functional_test.go:2016: Checking for existence of /etc/ssl/certs/89402.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh "sudo cat /etc/ssl/certs/89402.pem"
functional_test.go:2017: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 ssh "sudo cat /etc/ssl/certs/89402.pem": (10.6738992s)
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/89402.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh "sudo cat /usr/share/ca-certificates/89402.pem"
functional_test.go:2017: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 ssh "sudo cat /usr/share/ca-certificates/89402.pem": (10.5517285s)
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2017: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (11.1316496s)
--- PASS: TestFunctional/parallel/CertSync (65.89s)

TestFunctional/parallel/NodeLabels (0.23s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-758100 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.23s)
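The `--template` passed to kubectl above is a standard Go text/template; `range $k, $v` over a map visits keys in sorted order, so the printed label list is deterministic. A self-contained sketch of the same template body applied to a made-up label map (`labelKeys` and the label values are illustrative, not from the cluster):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// labelKeys applies the same template body kubectl is given above:
// print each label key followed by a space. text/template iterates
// maps in sorted key order, so the output is deterministic.
func labelKeys(labels map[string]string) string {
	tmpl := template.Must(template.New("labels").Parse(
		"{{range $k, $v := .}}{{$k}} {{end}}"))
	var b strings.Builder
	if err := tmpl.Execute(&b, labels); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	// Illustrative labels only; real node labels come from the cluster.
	fmt.Println(labelKeys(map[string]string{
		"kubernetes.io/arch": "amd64",
		"kubernetes.io/os":   "linux",
	}))
}
```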

TestFunctional/parallel/NonActiveRuntimeDisabled (11.45s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-758100 ssh "sudo systemctl is-active crio": exit status 1 (11.4543351s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (11.45s)
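The non-zero exit above is expected: `systemctl is-active` prints the unit state and exits 0 only when the unit is active, with status 3 the usual code for an inactive unit. The test therefore passes precisely because crio is disabled while Docker is the active runtime. A minimal sketch of that check, with `runtimeDisabled` as a hypothetical helper rather than the test's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// runtimeDisabled mirrors the assertion above: a container runtime
// counts as disabled when `systemctl is-active <unit>` exits non-zero
// and reports "inactive" on stdout.
func runtimeDisabled(exitCode int, stdout string) bool {
	return exitCode != 0 && strings.TrimSpace(stdout) == "inactive"
}

func main() {
	// Values taken from the log: stdout "inactive", ssh exit status 3.
	fmt.Println(runtimeDisabled(3, "inactive\n"))
	// prints: true
}
```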

TestFunctional/parallel/License (1.82s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2305: (dbg) Done: out/minikube-windows-amd64.exe license: (1.8011202s)
--- PASS: TestFunctional/parallel/License (1.82s)

TestFunctional/parallel/ImageCommands/ImageListShort (8.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image ls --format short --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image ls --format short --alsologtostderr: (8.3450469s)
functional_test.go:283: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-758100 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-758100
docker.io/kicbase/echo-server:functional-758100
functional_test.go:286: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-758100 image ls --format short --alsologtostderr:
I0317 10:59:19.133728    7856 out.go:345] Setting OutFile to fd 1572 ...
I0317 10:59:19.213853    7856 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:59:19.213853    7856 out.go:358] Setting ErrFile to fd 1484...
I0317 10:59:19.213853    7856 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:59:19.229899    7856 config.go:182] Loaded profile config "functional-758100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 10:59:19.230426    7856 config.go:182] Loaded profile config "functional-758100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 10:59:19.231886    7856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-758100 ).state
I0317 10:59:21.769149    7856 main.go:141] libmachine: [stdout =====>] : Running

I0317 10:59:21.769149    7856 main.go:141] libmachine: [stderr =====>] : 
I0317 10:59:21.781150    7856 ssh_runner.go:195] Run: systemctl --version
I0317 10:59:21.781150    7856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-758100 ).state
I0317 10:59:24.274483    7856 main.go:141] libmachine: [stdout =====>] : Running

I0317 10:59:24.274483    7856 main.go:141] libmachine: [stderr =====>] : 
I0317 10:59:24.274982    7856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-758100 ).networkadapters[0]).ipaddresses[0]
I0317 10:59:27.149534    7856 main.go:141] libmachine: [stdout =====>] : 172.25.21.21

I0317 10:59:27.149534    7856 main.go:141] libmachine: [stderr =====>] : 
I0317 10:59:27.150668    7856 sshutil.go:53] new ssh client: &{IP:172.25.21.21 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-758100\id_rsa Username:docker}
I0317 10:59:27.259697    7856 ssh_runner.go:235] Completed: systemctl --version: (5.4783666s)
I0317 10:59:27.269811    7856 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (8.35s)

TestFunctional/parallel/ImageCommands/ImageListTable (7.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image ls --format table --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image ls --format table --alsologtostderr: (7.987438s)
functional_test.go:283: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-758100 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-758100 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.32.2           | 85b7a174738ba | 97MB   |
| docker.io/library/nginx                     | latest            | b52e0b094bc0e | 192MB  |
| registry.k8s.io/etcd                        | 3.5.16-0          | a9e7e6b294baf | 150MB  |
| registry.k8s.io/kube-proxy                  | v1.32.2           | f1332858868e1 | 94MB   |
| registry.k8s.io/kube-controller-manager     | v1.32.2           | b6a454c5a800d | 89.7MB |
| registry.k8s.io/kube-scheduler              | v1.32.2           | d8e673e7c9983 | 69.6MB |
| docker.io/library/nginx                     | alpine            | 1ff4bb4faebcf | 47.9MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-758100 | dfc02ad17f471 | 30B    |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-758100 image ls --format table --alsologtostderr:
I0317 10:59:27.959424   11492 out.go:345] Setting OutFile to fd 1408 ...
I0317 10:59:28.032925   11492 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:59:28.032925   11492 out.go:358] Setting ErrFile to fd 1804...
I0317 10:59:28.032925   11492 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:59:28.050934   11492 config.go:182] Loaded profile config "functional-758100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 10:59:28.051927   11492 config.go:182] Loaded profile config "functional-758100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 10:59:28.052937   11492 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-758100 ).state
I0317 10:59:30.448181   11492 main.go:141] libmachine: [stdout =====>] : Running

I0317 10:59:30.448181   11492 main.go:141] libmachine: [stderr =====>] : 
I0317 10:59:30.464042   11492 ssh_runner.go:195] Run: systemctl --version
I0317 10:59:30.464042   11492 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-758100 ).state
I0317 10:59:32.819786   11492 main.go:141] libmachine: [stdout =====>] : Running

I0317 10:59:32.819873   11492 main.go:141] libmachine: [stderr =====>] : 
I0317 10:59:32.820010   11492 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-758100 ).networkadapters[0]).ipaddresses[0]
I0317 10:59:35.620382   11492 main.go:141] libmachine: [stdout =====>] : 172.25.21.21

I0317 10:59:35.620382   11492 main.go:141] libmachine: [stderr =====>] : 
I0317 10:59:35.620382   11492 sshutil.go:53] new ssh client: &{IP:172.25.21.21 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-758100\id_rsa Username:docker}
I0317 10:59:35.732427   11492 ssh_runner.go:235] Completed: systemctl --version: (5.2683504s)
I0317 10:59:35.749497   11492 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.99s)

TestFunctional/parallel/ImageCommands/ImageListJson (8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image ls --format json --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image ls --format json --alsologtostderr: (8.0038786s)
functional_test.go:283: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-758100 image ls --format json --alsologtostderr:
[{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"97000000"},{"id":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"89700000"},{"id":"f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"94000000"},{"id":"1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"dfc02ad17f47198511ff0fbc2b8d1ba216cb07f64bf52683e8c4dd625c1b9854","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-758100"],"size":"30"},{"id":"b52e0b094bc0e26c9eddc9e4ab7a64ce0033c3360d8b7ad4ff4132c4e03e8f7b","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"69600000"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"150000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-758100"],"size":"4940000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"}]
functional_test.go:286: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-758100 image ls --format json --alsologtostderr:
I0317 10:59:27.449106    1252 out.go:345] Setting OutFile to fd 1460 ...
I0317 10:59:27.530678    1252 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:59:27.530764    1252 out.go:358] Setting ErrFile to fd 1300...
I0317 10:59:27.530764    1252 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:59:27.547044    1252 config.go:182] Loaded profile config "functional-758100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 10:59:27.548044    1252 config.go:182] Loaded profile config "functional-758100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 10:59:27.549044    1252 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-758100 ).state
I0317 10:59:29.955286    1252 main.go:141] libmachine: [stdout =====>] : Running

I0317 10:59:29.955286    1252 main.go:141] libmachine: [stderr =====>] : 
I0317 10:59:29.969179    1252 ssh_runner.go:195] Run: systemctl --version
I0317 10:59:29.969179    1252 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-758100 ).state
I0317 10:59:32.356327    1252 main.go:141] libmachine: [stdout =====>] : Running

I0317 10:59:32.356327    1252 main.go:141] libmachine: [stderr =====>] : 
I0317 10:59:32.356327    1252 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-758100 ).networkadapters[0]).ipaddresses[0]
I0317 10:59:35.141248    1252 main.go:141] libmachine: [stdout =====>] : 172.25.21.21

I0317 10:59:35.141248    1252 main.go:141] libmachine: [stderr =====>] : 
I0317 10:59:35.141713    1252 sshutil.go:53] new ssh client: &{IP:172.25.21.21 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-758100\id_rsa Username:docker}
I0317 10:59:35.259703    1252 ssh_runner.go:235] Completed: systemctl --version: (5.2904888s)
I0317 10:59:35.271926    1252 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (8.00s)
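Each entry in the JSON stdout above is a flat object with `id`, `repoDigests`, `repoTags`, and `size` fields. A minimal sketch of decoding that shape in Go (the `imageInfo` struct and `parseImages` helper are inferred from the log, not taken from minikube's source):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// imageInfo models the fields visible in the `image ls --format json`
// output above; field names are inferred from the log itself.
type imageInfo struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

// parseImages decodes a JSON image list of the shape shown above.
func parseImages(raw string) ([]imageInfo, error) {
	var images []imageInfo
	err := json.Unmarshal([]byte(raw), &images)
	return images, err
}

func main() {
	// One entry copied from the log output above.
	raw := `[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"}]`
	images, err := parseImages(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(images[0].RepoTags[0], images[0].Size)
	// prints: registry.k8s.io/pause:3.10 736000
}
```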

TestFunctional/parallel/ImageCommands/ImageListYaml (8.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image ls --format yaml --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image ls --format yaml --alsologtostderr: (8.3489717s)
functional_test.go:283: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-758100 image ls --format yaml --alsologtostderr:
- id: 1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47900000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "69600000"
- id: b52e0b094bc0e26c9eddc9e4ab7a64ce0033c3360d8b7ad4ff4132c4e03e8f7b
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "97000000"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "94000000"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "89700000"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "150000000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-758100
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: dfc02ad17f47198511ff0fbc2b8d1ba216cb07f64bf52683e8c4dd625c1b9854
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-758100
size: "30"

functional_test.go:286: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-758100 image ls --format yaml --alsologtostderr:
I0317 10:59:19.627733   12748 out.go:345] Setting OutFile to fd 1708 ...
I0317 10:59:19.742496   12748 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:59:19.742496   12748 out.go:358] Setting ErrFile to fd 1812...
I0317 10:59:19.742593   12748 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:59:19.764331   12748 config.go:182] Loaded profile config "functional-758100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 10:59:19.764331   12748 config.go:182] Loaded profile config "functional-758100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 10:59:19.766312   12748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-758100 ).state
I0317 10:59:22.266419   12748 main.go:141] libmachine: [stdout =====>] : Running

I0317 10:59:22.266509   12748 main.go:141] libmachine: [stderr =====>] : 
I0317 10:59:22.281430   12748 ssh_runner.go:195] Run: systemctl --version
I0317 10:59:22.281430   12748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-758100 ).state
I0317 10:59:24.853122   12748 main.go:141] libmachine: [stdout =====>] : Running

I0317 10:59:24.853202   12748 main.go:141] libmachine: [stderr =====>] : 
I0317 10:59:24.853337   12748 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-758100 ).networkadapters[0]).ipaddresses[0]
I0317 10:59:27.634565   12748 main.go:141] libmachine: [stdout =====>] : 172.25.21.21

I0317 10:59:27.634565   12748 main.go:141] libmachine: [stderr =====>] : 
I0317 10:59:27.635489   12748 sshutil.go:53] new ssh client: &{IP:172.25.21.21 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-758100\id_rsa Username:docker}
I0317 10:59:27.744937   12748 ssh_runner.go:235] Completed: systemctl --version: (5.4634707s)
I0317 10:59:27.758378   12748 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (8.35s)

TestFunctional/parallel/ImageCommands/ImageBuild (29.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-758100 ssh pgrep buildkitd: exit status 1 (10.5095958s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image build -t localhost/my-image:functional-758100 testdata\build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image build -t localhost/my-image:functional-758100 testdata\build --alsologtostderr: (11.4000165s)
functional_test.go:340: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-758100 image build -t localhost/my-image:functional-758100 testdata\build --alsologtostderr:
I0317 10:59:34.301179    8652 out.go:345] Setting OutFile to fd 1692 ...
I0317 10:59:34.405267    8652 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:59:34.405267    8652 out.go:358] Setting ErrFile to fd 1600...
I0317 10:59:34.405267    8652 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:59:34.428083    8652 config.go:182] Loaded profile config "functional-758100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 10:59:34.450794    8652 config.go:182] Loaded profile config "functional-758100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0317 10:59:34.451992    8652 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-758100 ).state
I0317 10:59:36.793331    8652 main.go:141] libmachine: [stdout =====>] : Running

I0317 10:59:36.793331    8652 main.go:141] libmachine: [stderr =====>] : 
I0317 10:59:36.806186    8652 ssh_runner.go:195] Run: systemctl --version
I0317 10:59:36.806186    8652 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-758100 ).state
I0317 10:59:39.120127    8652 main.go:141] libmachine: [stdout =====>] : Running

I0317 10:59:39.120192    8652 main.go:141] libmachine: [stderr =====>] : 
I0317 10:59:39.120192    8652 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-758100 ).networkadapters[0]).ipaddresses[0]
I0317 10:59:41.779185    8652 main.go:141] libmachine: [stdout =====>] : 172.25.21.21

I0317 10:59:41.779892    8652 main.go:141] libmachine: [stderr =====>] : 
I0317 10:59:41.780234    8652 sshutil.go:53] new ssh client: &{IP:172.25.21.21 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-758100\id_rsa Username:docker}
I0317 10:59:41.885134    8652 ssh_runner.go:235] Completed: systemctl --version: (5.0787719s)
I0317 10:59:41.885134    8652 build_images.go:161] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.24257471.tar
I0317 10:59:41.898846    8652 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0317 10:59:41.929247    8652 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.24257471.tar
I0317 10:59:41.937949    8652 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.24257471.tar: stat -c "%s %y" /var/lib/minikube/build/build.24257471.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.24257471.tar': No such file or directory
I0317 10:59:41.937949    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.24257471.tar --> /var/lib/minikube/build/build.24257471.tar (3072 bytes)
I0317 10:59:42.004547    8652 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.24257471
I0317 10:59:42.040058    8652 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.24257471 -xf /var/lib/minikube/build/build.24257471.tar
I0317 10:59:42.058308    8652 docker.go:360] Building image: /var/lib/minikube/build/build.24257471
I0317 10:59:42.069248    8652 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-758100 /var/lib/minikube/build/build.24257471
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B 0.0s done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#4 DONE 0.1s

#5 [internal] load build context
#5 transferring context: 62B done
#5 DONE 0.2s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.0s
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.2s done
#4 DONE 0.9s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.2s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 writing image sha256:87848bc237b549972feccf316bd63ba3c27384678843ead54d8cf9945fb2a279 0.0s done
#8 naming to localhost/my-image:functional-758100 0.0s done
#8 DONE 0.2s
I0317 10:59:45.474432    8652 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-758100 /var/lib/minikube/build/build.24257471: (3.4051615s)
I0317 10:59:45.487122    8652 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.24257471
I0317 10:59:45.526049    8652 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.24257471.tar
I0317 10:59:45.553848    8652 build_images.go:217] Built localhost/my-image:functional-758100 from C:\Users\jenkins.minikube6\AppData\Local\Temp\build.24257471.tar
I0317 10:59:45.553983    8652 build_images.go:133] succeeded building to: functional-758100
I0317 10:59:45.553983    8652 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image ls: (7.2725633s)
E0317 11:00:00.893821    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:03:37.821486    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (29.18s)
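The BuildKit steps in the log above (#1 transfers a 97 B Dockerfile; #4 pulls busybox, #6 runs `true`, #7 adds `content.txt`) imply a three-instruction Dockerfile under `testdata\build`. A plausible reconstruction, not verified against the minikube repository (the real file may pin the image by digest):

```dockerfile
# Reconstructed from the [1/3]..[3/3] build steps logged above.
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
```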

TestFunctional/parallel/ImageCommands/Setup (2.27s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.1280878s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-758100
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.27s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (19.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image load --daemon kicbase/echo-server:functional-758100 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image load --daemon kicbase/echo-server:functional-758100 --alsologtostderr: (10.272625s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image ls: (8.8988744s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (19.17s)

TestFunctional/parallel/DockerEnv/powershell (46.73s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:516: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-758100 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-758100"
functional_test.go:516: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-758100 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-758100": (30.5825876s)
functional_test.go:539: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-758100 docker-env | Invoke-Expression ; docker images"
functional_test.go:539: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-758100 docker-env | Invoke-Expression ; docker images": (16.1314837s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (46.73s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.93s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 update-context --alsologtostderr -v=2
functional_test.go:2136: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 update-context --alsologtostderr -v=2: (2.9295513s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.93s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.97s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 update-context --alsologtostderr -v=2
functional_test.go:2136: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 update-context --alsologtostderr -v=2: (2.9650662s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.97s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.91s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 update-context --alsologtostderr -v=2
functional_test.go:2136: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 update-context --alsologtostderr -v=2: (2.9083913s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.91s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (17.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image load --daemon kicbase/echo-server:functional-758100 --alsologtostderr
functional_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image load --daemon kicbase/echo-server:functional-758100 --alsologtostderr: (9.3693211s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image ls: (8.3242138s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (17.69s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (18.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-758100
functional_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image load --daemon kicbase/echo-server:functional-758100 --alsologtostderr
functional_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image load --daemon kicbase/echo-server:functional-758100 --alsologtostderr: (9.7193085s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image ls: (8.0944487s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (18.73s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image save kicbase/echo-server:functional-758100 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:397: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image save kicbase/echo-server:functional-758100 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (8.5108855s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.51s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-758100 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-758100 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-mp6rg" [cfdf0e15-d862-48ca-8a5d-06d288aa27e2] Pending
helpers_test.go:344: "hello-node-fcfd88b6f-mp6rg" [cfdf0e15-d862-48ca-8a5d-06d288aa27e2] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.0056657s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.47s)

TestFunctional/parallel/ServiceCmd/List (14.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 service list
functional_test.go:1476: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 service list: (14.2617719s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.26s)

TestFunctional/parallel/ImageCommands/ImageRemove (17.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image rm kicbase/echo-server:functional-758100 --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image rm kicbase/echo-server:functional-758100 --alsologtostderr: (8.7070899s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image ls: (8.867001s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (17.57s)

TestFunctional/parallel/ProfileCmd/profile_not_create (15.18s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1292: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (14.8244834s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (15.18s)

TestFunctional/parallel/ServiceCmd/JSONOutput (14.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 service list -o json
functional_test.go:1506: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 service list -o json: (14.7068967s)
functional_test.go:1511: Took "14.7075622s" to run "out/minikube-windows-amd64.exe -p functional-758100 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.71s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (17.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:426: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (9.242487s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image ls: (8.710808s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (17.95s)

TestFunctional/parallel/ProfileCmd/profile_list (14.88s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1327: (dbg) Done: out/minikube-windows-amd64.exe profile list: (14.6257021s)
functional_test.go:1332: Took "14.626326s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1346: Took "254.398ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (14.88s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-758100
functional_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 image save --daemon kicbase/echo-server:functional-758100 --alsologtostderr
functional_test.go:441: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 image save --daemon kicbase/echo-server:functional-758100 --alsologtostderr: (9.1844175s)
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-758100
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (14.88s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1378: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (14.5834847s)
functional_test.go:1383: Took "14.5847937s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1396: Took "295.3075ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (14.88s)

TestFunctional/parallel/Version/short (0.28s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 version --short
--- PASS: TestFunctional/parallel/Version/short (0.28s)

TestFunctional/parallel/Version/components (8.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-758100 version -o=json --components
functional_test.go:2287: (dbg) Done: out/minikube-windows-amd64.exe -p functional-758100 version -o=json --components: (8.1601158s)
--- PASS: TestFunctional/parallel/Version/components (8.16s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-758100 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-758100 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-758100 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-758100 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1640: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 11720: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.66s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-758100 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.68s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-758100 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [31cea2f9-85f8-41f7-b730-012934e52581] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [31cea2f9-85f8-41f7-b730-012934e52581] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.0059469s
I0317 10:59:08.196946    8940 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.68s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-758100 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 8388: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.19s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-758100
--- PASS: TestFunctional/delete_echo-server_images (0.19s)

TestFunctional/delete_my-image_image (0.09s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-758100
--- PASS: TestFunctional/delete_my-image_image (0.09s)

TestFunctional/delete_minikube_cached_images (0.09s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-758100
--- PASS: TestFunctional/delete_minikube_cached_images (0.09s)

TestMultiControlPlane/serial/StartCluster (718.51s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-450500 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0317 11:06:12.676171    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:06:12.683916    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:06:12.695927    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:06:12.718333    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:06:12.760610    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:06:12.842829    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:06:13.005724    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:06:13.328160    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:06:13.970751    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:06:15.252966    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:06:17.815522    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:06:22.938780    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:06:33.180937    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:06:53.663862    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:07:34.626638    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:08:37.822860    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:08:56.549771    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:11:12.677910    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:11:40.394150    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:13:37.825381    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:16:12.680546    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-450500 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m21.5928564s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 status -v=7 --alsologtostderr
E0317 11:16:40.904429    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 status -v=7 --alsologtostderr: (36.9112551s)
--- PASS: TestMultiControlPlane/serial/StartCluster (718.51s)

TestMultiControlPlane/serial/DeployApp (13.61s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-450500 -- rollout status deployment/busybox: (5.8058714s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-9977c -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-9977c -- nslookup kubernetes.io: (1.868588s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-w6ngz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-xlpx5 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-9977c -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-w6ngz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-xlpx5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-9977c -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-w6ngz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-450500 -- exec busybox-58667487b6-xlpx5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (13.61s)

TestMultiControlPlane/serial/AddWorkerNode (268.72s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-450500 -v=7 --alsologtostderr
E0317 11:21:12.682193    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-450500 -v=7 --alsologtostderr: (3m39.3842474s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 status -v=7 --alsologtostderr
E0317 11:22:35.762303    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 status -v=7 --alsologtostderr: (49.3333711s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (268.72s)

TestMultiControlPlane/serial/NodeLabels (0.20s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-450500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.20s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (49.80s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0317 11:23:37.830640    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (49.7987693s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (49.80s)

TestMultiControlPlane/serial/CopyFile (650.53s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 status --output json -v=7 --alsologtostderr: (49.2411987s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp testdata\cp-test.txt ha-450500:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp testdata\cp-test.txt ha-450500:/home/docker/cp-test.txt: (9.8151642s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500 "sudo cat /home/docker/cp-test.txt": (9.6613769s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1823244443\001\cp-test_ha-450500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1823244443\001\cp-test_ha-450500.txt: (9.7705119s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500 "sudo cat /home/docker/cp-test.txt": (9.8328433s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500:/home/docker/cp-test.txt ha-450500-m02:/home/docker/cp-test_ha-450500_ha-450500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500:/home/docker/cp-test.txt ha-450500-m02:/home/docker/cp-test_ha-450500_ha-450500-m02.txt: (17.3614205s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500 "sudo cat /home/docker/cp-test.txt": (9.8573663s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m02 "sudo cat /home/docker/cp-test_ha-450500_ha-450500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m02 "sudo cat /home/docker/cp-test_ha-450500_ha-450500-m02.txt": (9.7307413s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500:/home/docker/cp-test.txt ha-450500-m03:/home/docker/cp-test_ha-450500_ha-450500-m03.txt
E0317 11:26:12.684616    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500:/home/docker/cp-test.txt ha-450500-m03:/home/docker/cp-test_ha-450500_ha-450500-m03.txt: (17.3110801s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500 "sudo cat /home/docker/cp-test.txt": (10.1649141s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m03 "sudo cat /home/docker/cp-test_ha-450500_ha-450500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m03 "sudo cat /home/docker/cp-test_ha-450500_ha-450500-m03.txt": (9.9267472s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500:/home/docker/cp-test.txt ha-450500-m04:/home/docker/cp-test_ha-450500_ha-450500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500:/home/docker/cp-test.txt ha-450500-m04:/home/docker/cp-test_ha-450500_ha-450500-m04.txt: (17.0477445s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500 "sudo cat /home/docker/cp-test.txt": (9.8386057s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m04 "sudo cat /home/docker/cp-test_ha-450500_ha-450500-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m04 "sudo cat /home/docker/cp-test_ha-450500_ha-450500-m04.txt": (9.8505995s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp testdata\cp-test.txt ha-450500-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp testdata\cp-test.txt ha-450500-m02:/home/docker/cp-test.txt: (10.0562472s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m02 "sudo cat /home/docker/cp-test.txt": (9.83268s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1823244443\001\cp-test_ha-450500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1823244443\001\cp-test_ha-450500-m02.txt: (9.7979805s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m02 "sudo cat /home/docker/cp-test.txt": (9.71046s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m02:/home/docker/cp-test.txt ha-450500:/home/docker/cp-test_ha-450500-m02_ha-450500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m02:/home/docker/cp-test.txt ha-450500:/home/docker/cp-test_ha-450500-m02_ha-450500.txt: (17.1825501s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m02 "sudo cat /home/docker/cp-test.txt": (9.8585574s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500 "sudo cat /home/docker/cp-test_ha-450500-m02_ha-450500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500 "sudo cat /home/docker/cp-test_ha-450500-m02_ha-450500.txt": (10.3037995s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m02:/home/docker/cp-test.txt ha-450500-m03:/home/docker/cp-test_ha-450500-m02_ha-450500-m03.txt
E0317 11:28:37.833556    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m02:/home/docker/cp-test.txt ha-450500-m03:/home/docker/cp-test_ha-450500-m02_ha-450500-m03.txt: (17.142062s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m02 "sudo cat /home/docker/cp-test.txt": (9.8141682s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m03 "sudo cat /home/docker/cp-test_ha-450500-m02_ha-450500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m03 "sudo cat /home/docker/cp-test_ha-450500-m02_ha-450500-m03.txt": (9.8445843s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m02:/home/docker/cp-test.txt ha-450500-m04:/home/docker/cp-test_ha-450500-m02_ha-450500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m02:/home/docker/cp-test.txt ha-450500-m04:/home/docker/cp-test_ha-450500-m02_ha-450500-m04.txt: (17.162703s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m02 "sudo cat /home/docker/cp-test.txt": (9.7019719s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m04 "sudo cat /home/docker/cp-test_ha-450500-m02_ha-450500-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m04 "sudo cat /home/docker/cp-test_ha-450500-m02_ha-450500-m04.txt": (9.8153472s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp testdata\cp-test.txt ha-450500-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp testdata\cp-test.txt ha-450500-m03:/home/docker/cp-test.txt: (9.9258459s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m03 "sudo cat /home/docker/cp-test.txt": (9.8181s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1823244443\001\cp-test_ha-450500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1823244443\001\cp-test_ha-450500-m03.txt: (10.3160257s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m03 "sudo cat /home/docker/cp-test.txt": (9.8984264s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m03:/home/docker/cp-test.txt ha-450500:/home/docker/cp-test_ha-450500-m03_ha-450500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m03:/home/docker/cp-test.txt ha-450500:/home/docker/cp-test_ha-450500-m03_ha-450500.txt: (17.443103s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m03 "sudo cat /home/docker/cp-test.txt": (9.7553797s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500 "sudo cat /home/docker/cp-test_ha-450500-m03_ha-450500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500 "sudo cat /home/docker/cp-test_ha-450500-m03_ha-450500.txt": (9.7880943s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m03:/home/docker/cp-test.txt ha-450500-m02:/home/docker/cp-test_ha-450500-m03_ha-450500-m02.txt
E0317 11:31:12.687348    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m03:/home/docker/cp-test.txt ha-450500-m02:/home/docker/cp-test_ha-450500-m03_ha-450500-m02.txt: (17.102849s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m03 "sudo cat /home/docker/cp-test.txt": (9.7305264s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m02 "sudo cat /home/docker/cp-test_ha-450500-m03_ha-450500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m02 "sudo cat /home/docker/cp-test_ha-450500-m03_ha-450500-m02.txt": (9.8755486s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m03:/home/docker/cp-test.txt ha-450500-m04:/home/docker/cp-test_ha-450500-m03_ha-450500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m03:/home/docker/cp-test.txt ha-450500-m04:/home/docker/cp-test_ha-450500-m03_ha-450500-m04.txt: (17.0872776s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m03 "sudo cat /home/docker/cp-test.txt": (9.80017s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m04 "sudo cat /home/docker/cp-test_ha-450500-m03_ha-450500-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m04 "sudo cat /home/docker/cp-test_ha-450500-m03_ha-450500-m04.txt": (9.922595s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp testdata\cp-test.txt ha-450500-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp testdata\cp-test.txt ha-450500-m04:/home/docker/cp-test.txt: (9.8884903s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m04 "sudo cat /home/docker/cp-test.txt": (9.8875639s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1823244443\001\cp-test_ha-450500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1823244443\001\cp-test_ha-450500-m04.txt: (9.8909302s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m04 "sudo cat /home/docker/cp-test.txt": (9.8707005s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m04:/home/docker/cp-test.txt ha-450500:/home/docker/cp-test_ha-450500-m04_ha-450500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m04:/home/docker/cp-test.txt ha-450500:/home/docker/cp-test_ha-450500-m04_ha-450500.txt: (17.2863745s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m04 "sudo cat /home/docker/cp-test.txt"
E0317 11:33:20.915730    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m04 "sudo cat /home/docker/cp-test.txt": (9.937836s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500 "sudo cat /home/docker/cp-test_ha-450500-m04_ha-450500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500 "sudo cat /home/docker/cp-test_ha-450500-m04_ha-450500.txt": (9.9404309s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m04:/home/docker/cp-test.txt ha-450500-m02:/home/docker/cp-test_ha-450500-m04_ha-450500-m02.txt
E0317 11:33:37.835841    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m04:/home/docker/cp-test.txt ha-450500-m02:/home/docker/cp-test_ha-450500-m04_ha-450500-m02.txt: (17.1733452s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m04 "sudo cat /home/docker/cp-test.txt": (9.7795119s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m02 "sudo cat /home/docker/cp-test_ha-450500-m04_ha-450500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m02 "sudo cat /home/docker/cp-test_ha-450500-m04_ha-450500-m02.txt": (9.6971614s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m04:/home/docker/cp-test.txt ha-450500-m03:/home/docker/cp-test_ha-450500-m04_ha-450500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 cp ha-450500-m04:/home/docker/cp-test.txt ha-450500-m03:/home/docker/cp-test_ha-450500-m04_ha-450500-m03.txt: (17.1987441s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m04 "sudo cat /home/docker/cp-test.txt": (9.8748476s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m03 "sudo cat /home/docker/cp-test_ha-450500-m04_ha-450500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-450500 ssh -n ha-450500-m03 "sudo cat /home/docker/cp-test_ha-450500-m04_ha-450500-m03.txt": (9.9669374s)
--- PASS: TestMultiControlPlane/serial/CopyFile (650.53s)

TestImageBuild/serial/Setup (197.69s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-134500 --driver=hyperv
E0317 11:38:37.838413    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:39:15.773830    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:41:12.692555    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-134500 --driver=hyperv: (3m17.6887232s)
--- PASS: TestImageBuild/serial/Setup (197.69s)

TestImageBuild/serial/NormalBuild (10.77s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-134500
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-134500: (10.7716434s)
--- PASS: TestImageBuild/serial/NormalBuild (10.77s)

TestImageBuild/serial/BuildWithBuildArg (8.92s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-134500
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-134500: (8.9205198s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.92s)

TestImageBuild/serial/BuildWithDockerIgnore (8.27s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-134500
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-134500: (8.2655382s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (8.27s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.37s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-134500
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-134500: (8.3647607s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.37s)

TestJSONOutput/start/Command (204.44s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-814700 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0317 11:43:37.840443    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:46:12.695604    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-814700 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m24.4415951s)
--- PASS: TestJSONOutput/start/Command (204.44s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.95s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-814700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-814700 --output=json --user=testUser: (7.946904s)
--- PASS: TestJSONOutput/pause/Command (7.95s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.95s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-814700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-814700 --output=json --user=testUser: (7.9532681s)
--- PASS: TestJSONOutput/unpause/Command (7.95s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (35.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-814700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-814700 --output=json --user=testUser: (35.1126258s)
--- PASS: TestJSONOutput/stop/Command (35.11s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.95s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-253800 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-253800 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (281.7282ms)
-- stdout --
	{"specversion":"1.0","id":"cd7d411d-767b-4df9-98ed-50b505f8d7fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-253800] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5e48033b-5577-4d48-8dfd-132db8396199","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"6ca3bdef-19c4-40fa-a527-d99d85118da8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"88929e48-d175-42f1-8131-02d358a1f8c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"af6799b1-86e3-47bd-ae91-ed3eb36d5ef5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20535"}}
	{"specversion":"1.0","id":"a261d0df-5a32-4ce7-986f-2bfb447c01f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"64dd28b6-0211-43e5-810c-73a3588cee0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-253800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-253800
--- PASS: TestErrorJSONOutput (0.95s)

TestMainNoArgs (0.24s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.24s)

TestMinikubeProfile (536.88s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-443700 --driver=hyperv
E0317 11:48:37.843285    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:50:00.928259    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-443700 --driver=hyperv: (3m18.2860467s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-443700 --driver=hyperv
E0317 11:51:12.698670    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:53:37.846810    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-443700 --driver=hyperv: (3m22.1958803s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-443700
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (24.503755s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-443700
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (24.5156936s)
helpers_test.go:175: Cleaning up "second-443700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-443700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-443700: (40.5986447s)
helpers_test.go:175: Cleaning up "first-443700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-443700
E0317 11:55:55.785334    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 11:56:12.701342    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-443700: (46.1052399s)
--- PASS: TestMinikubeProfile (536.88s)

TestMountStart/serial/StartWithMountFirst (158.16s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-720200 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0317 11:58:37.849787    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-720200 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m37.162029s)
--- PASS: TestMountStart/serial/StartWithMountFirst (158.16s)

TestMountStart/serial/VerifyMountFirst (9.78s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-720200 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-720200 ssh -- ls /minikube-host: (9.7830971s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.78s)

TestMountStart/serial/StartWithMountSecond (158.44s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-803900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0317 12:01:12.704904    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-803900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m37.4390743s)
--- PASS: TestMountStart/serial/StartWithMountSecond (158.44s)

TestMountStart/serial/VerifyMountSecond (9.78s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-803900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-803900 ssh -- ls /minikube-host: (9.7775462s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.78s)

TestMountStart/serial/DeleteFirst (31.28s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-720200 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-720200 --alsologtostderr -v=5: (31.2742711s)
--- PASS: TestMountStart/serial/DeleteFirst (31.28s)

TestMountStart/serial/VerifyMountPostDelete (9.60s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-803900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-803900 ssh -- ls /minikube-host: (9.5993965s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.60s)

TestMountStart/serial/Stop (26.79s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-803900
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-803900: (26.7840359s)
--- PASS: TestMountStart/serial/Stop (26.79s)

TestMountStart/serial/RestartStopped (119.35s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-803900
E0317 12:03:37.852510    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-803900: (1m58.3467898s)
--- PASS: TestMountStart/serial/RestartStopped (119.35s)

TestMountStart/serial/VerifyMountPostStop (9.48s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-803900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-803900 ssh -- ls /minikube-host: (9.4824904s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.48s)

TestMultiNode/serial/FreshStart2Nodes (446.24s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-781100 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0317 12:06:12.707093    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 12:06:40.940182    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 12:08:37.855499    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 12:11:12.710793    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 12:12:35.798386    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-781100 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (7m2.0713802s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 status --alsologtostderr: (24.1671924s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (446.24s)

TestMultiNode/serial/DeployApp2Nodes (9.54s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- rollout status deployment/busybox: (3.6241299s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- exec busybox-58667487b6-kvm5b -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- exec busybox-58667487b6-kvm5b -- nslookup kubernetes.io: (1.9597418s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- exec busybox-58667487b6-vnkbn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- exec busybox-58667487b6-kvm5b -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- exec busybox-58667487b6-vnkbn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- exec busybox-58667487b6-kvm5b -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-781100 -- exec busybox-58667487b6-vnkbn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.54s)

TestMultiNode/serial/AddNode (243.82s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-781100 -v 3 --alsologtostderr
E0317 12:16:12.714410    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-781100 -v 3 --alsologtostderr: (3m27.6248106s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 status --alsologtostderr: (36.1945888s)
--- PASS: TestMultiNode/serial/AddNode (243.82s)

TestMultiNode/serial/MultiNodeLabels (0.19s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-781100 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.19s)

TestMultiNode/serial/ProfileList (35.94s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0317 12:18:37.860909    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (35.9351391s)
--- PASS: TestMultiNode/serial/ProfileList (35.94s)

TestMultiNode/serial/CopyFile (365.76s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 status --output json --alsologtostderr: (35.8523505s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 cp testdata\cp-test.txt multinode-781100:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 cp testdata\cp-test.txt multinode-781100:/home/docker/cp-test.txt: (9.4687379s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100 "sudo cat /home/docker/cp-test.txt": (9.4832684s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2643708098\001\cp-test_multinode-781100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2643708098\001\cp-test_multinode-781100.txt: (9.6472385s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100 "sudo cat /home/docker/cp-test.txt": (9.6304457s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100:/home/docker/cp-test.txt multinode-781100-m02:/home/docker/cp-test_multinode-781100_multinode-781100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100:/home/docker/cp-test.txt multinode-781100-m02:/home/docker/cp-test_multinode-781100_multinode-781100-m02.txt: (17.3142071s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100 "sudo cat /home/docker/cp-test.txt": (9.6406312s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m02 "sudo cat /home/docker/cp-test_multinode-781100_multinode-781100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m02 "sudo cat /home/docker/cp-test_multinode-781100_multinode-781100-m02.txt": (9.4961916s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100:/home/docker/cp-test.txt multinode-781100-m03:/home/docker/cp-test_multinode-781100_multinode-781100-m03.txt
E0317 12:21:12.716467    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100:/home/docker/cp-test.txt multinode-781100-m03:/home/docker/cp-test_multinode-781100_multinode-781100-m03.txt: (16.5939833s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100 "sudo cat /home/docker/cp-test.txt": (9.550174s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m03 "sudo cat /home/docker/cp-test_multinode-781100_multinode-781100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m03 "sudo cat /home/docker/cp-test_multinode-781100_multinode-781100-m03.txt": (9.7264124s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 cp testdata\cp-test.txt multinode-781100-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 cp testdata\cp-test.txt multinode-781100-m02:/home/docker/cp-test.txt: (9.586384s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m02 "sudo cat /home/docker/cp-test.txt": (9.5681009s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2643708098\001\cp-test_multinode-781100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2643708098\001\cp-test_multinode-781100-m02.txt: (9.7045101s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m02 "sudo cat /home/docker/cp-test.txt": (9.6220728s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100-m02:/home/docker/cp-test.txt multinode-781100:/home/docker/cp-test_multinode-781100-m02_multinode-781100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100-m02:/home/docker/cp-test.txt multinode-781100:/home/docker/cp-test_multinode-781100-m02_multinode-781100.txt: (16.5750362s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m02 "sudo cat /home/docker/cp-test.txt": (9.5045428s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100 "sudo cat /home/docker/cp-test_multinode-781100-m02_multinode-781100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100 "sudo cat /home/docker/cp-test_multinode-781100-m02_multinode-781100.txt": (9.5440379s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100-m02:/home/docker/cp-test.txt multinode-781100-m03:/home/docker/cp-test_multinode-781100-m02_multinode-781100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100-m02:/home/docker/cp-test.txt multinode-781100-m03:/home/docker/cp-test_multinode-781100-m02_multinode-781100-m03.txt: (16.5150547s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m02 "sudo cat /home/docker/cp-test.txt": (9.5926753s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m03 "sudo cat /home/docker/cp-test_multinode-781100-m02_multinode-781100-m03.txt"
E0317 12:23:20.952944    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m03 "sudo cat /home/docker/cp-test_multinode-781100-m02_multinode-781100-m03.txt": (9.5152469s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 cp testdata\cp-test.txt multinode-781100-m03:/home/docker/cp-test.txt
E0317 12:23:37.864720    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 cp testdata\cp-test.txt multinode-781100-m03:/home/docker/cp-test.txt: (9.6774315s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m03 "sudo cat /home/docker/cp-test.txt": (9.5054805s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2643708098\001\cp-test_multinode-781100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2643708098\001\cp-test_multinode-781100-m03.txt: (9.5334721s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m03 "sudo cat /home/docker/cp-test.txt": (9.518619s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100-m03:/home/docker/cp-test.txt multinode-781100:/home/docker/cp-test_multinode-781100-m03_multinode-781100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100-m03:/home/docker/cp-test.txt multinode-781100:/home/docker/cp-test_multinode-781100-m03_multinode-781100.txt: (16.6537803s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m03 "sudo cat /home/docker/cp-test.txt": (9.5024903s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100 "sudo cat /home/docker/cp-test_multinode-781100-m03_multinode-781100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100 "sudo cat /home/docker/cp-test_multinode-781100-m03_multinode-781100.txt": (9.5730584s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100-m03:/home/docker/cp-test.txt multinode-781100-m02:/home/docker/cp-test_multinode-781100-m03_multinode-781100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 cp multinode-781100-m03:/home/docker/cp-test.txt multinode-781100-m02:/home/docker/cp-test_multinode-781100-m03_multinode-781100-m02.txt: (16.6229721s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m03 "sudo cat /home/docker/cp-test.txt": (9.478403s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m02 "sudo cat /home/docker/cp-test_multinode-781100-m03_multinode-781100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 ssh -n multinode-781100-m02 "sudo cat /home/docker/cp-test_multinode-781100-m03_multinode-781100-m02.txt": (9.5367618s)
--- PASS: TestMultiNode/serial/CopyFile (365.76s)

TestMultiNode/serial/StopNode (76.96s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 node stop m03: (24.7479949s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-781100 status: exit status 7 (26.0998771s)

-- stdout --
	multinode-781100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-781100-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-781100-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 status --alsologtostderr
E0317 12:26:12.719542    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-781100 status --alsologtostderr: exit status 7 (26.1072274s)

-- stdout --
	multinode-781100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-781100-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-781100-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0317 12:26:09.719405   12052 out.go:345] Setting OutFile to fd 1632 ...
	I0317 12:26:09.789772   12052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:26:09.790773   12052 out.go:358] Setting ErrFile to fd 1056...
	I0317 12:26:09.790773   12052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:26:09.804778   12052 out.go:352] Setting JSON to false
	I0317 12:26:09.804778   12052 mustload.go:65] Loading cluster: multinode-781100
	I0317 12:26:09.804778   12052 notify.go:220] Checking for updates...
	I0317 12:26:09.805783   12052 config.go:182] Loaded profile config "multinode-781100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 12:26:09.805783   12052 status.go:174] checking status of multinode-781100 ...
	I0317 12:26:09.807821   12052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:26:12.034160   12052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:26:12.034160   12052 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:26:12.034160   12052 status.go:371] multinode-781100 host status = "Running" (err=<nil>)
	I0317 12:26:12.034160   12052 host.go:66] Checking if "multinode-781100" exists ...
	I0317 12:26:12.034887   12052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:26:14.231767   12052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:26:14.231767   12052 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:26:14.231767   12052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:26:16.797492   12052 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:26:16.797601   12052 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:26:16.797601   12052 host.go:66] Checking if "multinode-781100" exists ...
	I0317 12:26:16.810962   12052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 12:26:16.810962   12052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100 ).state
	I0317 12:26:18.971709   12052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:26:18.971881   12052 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:26:18.972124   12052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100 ).networkadapters[0]).ipaddresses[0]
	I0317 12:26:21.541482   12052 main.go:141] libmachine: [stdout =====>] : 172.25.16.124
	
	I0317 12:26:21.542542   12052 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:26:21.542593   12052 sshutil.go:53] new ssh client: &{IP:172.25.16.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100\id_rsa Username:docker}
	I0317 12:26:21.636311   12052 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.825298s)
	I0317 12:26:21.649123   12052 ssh_runner.go:195] Run: systemctl --version
	I0317 12:26:21.670017   12052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 12:26:21.702644   12052 kubeconfig.go:125] found "multinode-781100" server: "https://172.25.16.124:8443"
	I0317 12:26:21.702747   12052 api_server.go:166] Checking apiserver status ...
	I0317 12:26:21.712691   12052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 12:26:21.748845   12052 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2108/cgroup
	W0317 12:26:21.766786   12052 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2108/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0317 12:26:21.778726   12052 ssh_runner.go:195] Run: ls
	I0317 12:26:21.785207   12052 api_server.go:253] Checking apiserver healthz at https://172.25.16.124:8443/healthz ...
	I0317 12:26:21.792810   12052 api_server.go:279] https://172.25.16.124:8443/healthz returned 200:
	ok
	I0317 12:26:21.792810   12052 status.go:463] multinode-781100 apiserver status = Running (err=<nil>)
	I0317 12:26:21.792810   12052 status.go:176] multinode-781100 status: &{Name:multinode-781100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 12:26:21.792810   12052 status.go:174] checking status of multinode-781100-m02 ...
	I0317 12:26:21.793737   12052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:26:23.933034   12052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:26:23.933614   12052 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:26:23.933614   12052 status.go:371] multinode-781100-m02 host status = "Running" (err=<nil>)
	I0317 12:26:23.933614   12052 host.go:66] Checking if "multinode-781100-m02" exists ...
	I0317 12:26:23.934365   12052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:26:26.103515   12052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:26:26.103515   12052 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:26:26.103515   12052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:26:28.693368   12052 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:26:28.693368   12052 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:26:28.694282   12052 host.go:66] Checking if "multinode-781100-m02" exists ...
	I0317 12:26:28.706123   12052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 12:26:28.706123   12052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m02 ).state
	I0317 12:26:30.857962   12052 main.go:141] libmachine: [stdout =====>] : Running
	
	I0317 12:26:30.859059   12052 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:26:30.859059   12052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-781100-m02 ).networkadapters[0]).ipaddresses[0]
	I0317 12:26:33.393108   12052 main.go:141] libmachine: [stdout =====>] : 172.25.25.119
	
	I0317 12:26:33.393108   12052 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:26:33.394042   12052 sshutil.go:53] new ssh client: &{IP:172.25.25.119 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-781100-m02\id_rsa Username:docker}
	I0317 12:26:33.500963   12052 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7938713s)
	I0317 12:26:33.512871   12052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 12:26:33.538510   12052 status.go:176] multinode-781100-m02 status: &{Name:multinode-781100-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0317 12:26:33.538683   12052 status.go:174] checking status of multinode-781100-m03 ...
	I0317 12:26:33.538906   12052 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-781100-m03 ).state
	I0317 12:26:35.681421   12052 main.go:141] libmachine: [stdout =====>] : Off
	
	I0317 12:26:35.681421   12052 main.go:141] libmachine: [stderr =====>] : 
	I0317 12:26:35.681616   12052 status.go:371] multinode-781100-m03 host status = "Stopped" (err=<nil>)
	I0317 12:26:35.681616   12052 status.go:384] host is not running, skipping remaining checks
	I0317 12:26:35.681704   12052 status.go:176] multinode-781100-m03 status: &{Name:multinode-781100-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (76.96s)

TestMultiNode/serial/StartAfterStop (196.83s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 node start m03 -v=7 --alsologtostderr
E0317 12:28:37.868037    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 12:29:15.811603    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 node start m03 -v=7 --alsologtostderr: (2m40.2161624s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-781100 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-781100 status -v=7 --alsologtostderr: (36.4428899s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (196.83s)

TestPreload (521.68s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-073400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0317 12:38:37.873901    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 12:40:00.965522    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 12:41:12.729942    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-073400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m31.0270422s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-073400 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-073400 image pull gcr.io/k8s-minikube/busybox: (8.8605561s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-073400
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-073400: (40.1190297s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-073400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0317 12:43:37.878379    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-073400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m32.5278841s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-073400 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-073400 image list: (7.3938145s)
helpers_test.go:175: Cleaning up "test-preload-073400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-073400
E0317 12:45:55.824800    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-073400: (41.7476607s)
--- PASS: TestPreload (521.68s)

TestScheduledStopWindows (335.07s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-360700 --memory=2048 --driver=hyperv
E0317 12:46:12.733162    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 12:48:37.881047    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-360700 --memory=2048 --driver=hyperv: (3m22.3528746s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-360700 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-360700 --schedule 5m: (10.6419373s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-360700 -n scheduled-stop-360700
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-360700 -n scheduled-stop-360700: exit status 1 (10.01194s)
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-360700 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-360700 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.7105227s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-360700 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-360700 --schedule 5s: (10.7791275s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-360700
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-360700: exit status 7 (2.4275639s)

-- stdout --
	scheduled-stop-360700
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-360700 -n scheduled-stop-360700
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-360700 -n scheduled-stop-360700: exit status 7 (2.3986362s)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-360700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-360700
E0317 12:51:12.736771    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-360700: (26.7437309s)
--- PASS: TestScheduledStopWindows (335.07s)
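The `--- PASS: Name (seconds)` lines throughout this report follow go test's standard summary format. A minimal Python sketch for tallying results and durations from a report like this one (the regex and helper name are my own, inferred from the lines above):

```python
import re

# go test summary lines look like: "--- PASS: TestPreload (521.68s)"
RESULT_RE = re.compile(r"^--- (PASS|FAIL|SKIP): (\S+) \(([\d.]+)s\)")

def parse_results(lines):
    """Yield (status, test name, seconds) for each go test summary line."""
    for line in lines:
        m = RESULT_RE.match(line.strip())
        if m:
            yield m.group(1), m.group(2), float(m.group(3))

# Two summary lines copied from this report.
report = [
    "--- PASS: TestScheduledStopWindows (335.07s)",
    "--- SKIP: TestDownloadOnlyKic (0.00s)",
]
for status, name, secs in parse_results(report):
    print(status, name, secs)
```

This is only a sketch for slicing a report after the fact; the authoritative pass/fail counts are the ones in the report header.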

TestRunningBinaryUpgrade (1095.57s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.1946383435.exe start -p running-upgrade-374500 --memory=2200 --vm-driver=hyperv
E0317 12:53:37.885106    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.1946383435.exe start -p running-upgrade-374500 --memory=2200 --vm-driver=hyperv: (8m23.1630873s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-374500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0317 13:01:12.742862    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-374500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (8m48.5094885s)
helpers_test.go:175: Cleaning up "running-upgrade-374500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-374500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-374500: (1m3.1223031s)
--- PASS: TestRunningBinaryUpgrade (1095.57s)

TestKubernetesUpgrade (1372.14s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-816300 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
E0317 12:56:40.979488    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-816300 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (8m8.1289068s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-816300
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-816300: (35.4930296s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-816300 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-816300 status --format={{.Host}}: exit status 7 (2.6000474s)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-816300 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=hyperv
E0317 13:06:12.745579    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-816300 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=hyperv: (6m54.0162388s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-816300 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-816300 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-816300 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (267.4659ms)

-- stdout --
	* [kubernetes-upgrade-816300] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-816300
	    minikube start -p kubernetes-upgrade-816300 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8163002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-816300 --kubernetes-version=v1.32.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-816300 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=hyperv
E0317 13:13:20.993265    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0317 13:13:37.897739    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-816300 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=hyperv: (6m22.423357s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-816300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-816300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-816300: (49.0070234s)
--- PASS: TestKubernetesUpgrade (1372.14s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-183300 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-183300 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (392.4183ms)

-- stdout --
	* [NoKubernetes-183300] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)
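The MK_USAGE failure above shows minikube rejecting `--kubernetes-version` when combined with `--no-kubernetes`. minikube's actual check is in Go; as an illustration only, the same mutually-exclusive-flag pattern can be sketched in Python with argparse (flag names mirror the log; the parser itself is hypothetical, and argparse exits with status 2 rather than minikube's 14):

```python
import argparse

# Hypothetical parser mirroring the two conflicting flags from the log.
parser = argparse.ArgumentParser(prog="start")
group = parser.add_mutually_exclusive_group()
group.add_argument("--no-kubernetes", action="store_true")
group.add_argument("--kubernetes-version")

# Either flag alone is accepted.
args = parser.parse_args(["--no-kubernetes"])
print(args.no_kubernetes)  # True

# Combining both is rejected, as in the test above.
try:
    parser.parse_args(["--no-kubernetes", "--kubernetes-version=1.20"])
except SystemExit as e:
    print("rejected, exit", e.code)
```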

TestStoppedBinaryUpgrade/Setup (0.8s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.80s)

TestStoppedBinaryUpgrade/Upgrade (935.98s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.1173460243.exe start -p stopped-upgrade-112300 --memory=2200 --vm-driver=hyperv
E0317 12:58:37.887534    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.1173460243.exe start -p stopped-upgrade-112300 --memory=2200 --vm-driver=hyperv: (8m28.6823377s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.1173460243.exe -p stopped-upgrade-112300 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.1173460243.exe -p stopped-upgrade-112300 stop: (38.3873912s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-112300 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0317 13:08:37.893547    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-112300 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m28.9089987s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (935.98s)

TestPause/serial/Start (485.74s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-471400 --memory=2048 --install-addons=false --wait=all --driver=hyperv
E0317 13:03:37.890903    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-471400 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (8m5.7373605s)
--- PASS: TestPause/serial/Start (485.74s)

TestPause/serial/SecondStartNoReconfiguration (386.1s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-471400 --alsologtostderr -v=1 --driver=hyperv
E0317 13:11:12.749667    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-758100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-471400 --alsologtostderr -v=1 --driver=hyperv: (6m26.0644219s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (386.10s)

TestStoppedBinaryUpgrade/MinikubeLogs (10.33s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-112300
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-112300: (10.3329555s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (10.33s)

TestPause/serial/Pause (8.59s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-471400 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-471400 --alsologtostderr -v=5: (8.5864386s)
--- PASS: TestPause/serial/Pause (8.59s)

TestPause/serial/VerifyStatus (13.59s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-471400 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-471400 --output=json --layout=cluster: exit status 2 (13.5921416s)

-- stdout --
	{"Name":"pause-471400","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-471400","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (13.59s)
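The `--layout=cluster` JSON above encodes state as HTTP-style status codes (200 OK, 405 Stopped, 418 Paused). A minimal sketch for summarizing such output, using a trimmed copy of the payload from the log (field names come from the output above; the helper name is my own):

```python
import json

# Trimmed from the `minikube status --output=json --layout=cluster` output above.
payload = """{"Name":"pause-471400","StatusCode":418,"StatusName":"Paused",
 "Nodes":[{"Name":"pause-471400","StatusCode":200,"StatusName":"OK",
  "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
                "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}"""

def component_states(cluster):
    """Map each node component to its StatusName, e.g. {'apiserver': 'Paused'}."""
    return {
        name: comp["StatusName"]
        for node in cluster.get("Nodes", [])
        for name, comp in node.get("Components", {}).items()
    }

cluster = json.loads(payload)
print(cluster["StatusName"])       # Paused
print(component_states(cluster))
```

Note the test above treats exit status 2 from `minikube status` as expected for a paused cluster: the command signals non-OK cluster state through its exit code as well as the JSON body.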


Test skip (33/211)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-758100 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:927: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-758100 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 7024: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/DryRun (5.03s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-758100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:991: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-758100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0269781s)

-- stdout --
	* [functional-758100] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	I0317 10:59:08.421877   12780 out.go:345] Setting OutFile to fd 848 ...
	I0317 10:59:08.502048   12780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:59:08.502048   12780 out.go:358] Setting ErrFile to fd 1340...
	I0317 10:59:08.502048   12780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:59:08.520170   12780 out.go:352] Setting JSON to false
	I0317 10:59:08.524166   12780 start.go:129] hostinfo: {"hostname":"minikube6","uptime":2925,"bootTime":1742206223,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5608 Build 19045.5608","kernelVersion":"10.0.19045.5608 Build 19045.5608","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0317 10:59:08.525156   12780 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0317 10:59:08.531189   12780 out.go:177] * [functional-758100] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608
	I0317 10:59:08.535168   12780 notify.go:220] Checking for updates...
	I0317 10:59:08.538157   12780 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 10:59:08.541164   12780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 10:59:08.544161   12780 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0317 10:59:08.547236   12780 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 10:59:08.550584   12780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 10:59:08.554610   12780 config.go:182] Loaded profile config "functional-758100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 10:59:08.555210   12780 driver.go:394] Setting default libvirt URI to qemu:///system
** /stderr **
functional_test.go:997: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.03s)

TestFunctional/parallel/InternationalLanguage (5.04s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-758100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
E0317 10:58:37.819418    8940 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-331000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-758100 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0401799s)
-- stdout --
	* [functional-758100] minikube v1.35.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20535
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
-- /stdout --
** stderr ** 
	I0317 10:58:36.936497    4064 out.go:345] Setting OutFile to fd 1600 ...
	I0317 10:58:37.021953    4064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:58:37.021953    4064 out.go:358] Setting ErrFile to fd 1280...
	I0317 10:58:37.021953    4064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 10:58:37.050926    4064 out.go:352] Setting JSON to false
	I0317 10:58:37.057272    4064 start.go:129] hostinfo: {"hostname":"minikube6","uptime":2893,"bootTime":1742206223,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5608 Build 19045.5608","kernelVersion":"10.0.19045.5608 Build 19045.5608","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0317 10:58:37.057469    4064 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0317 10:58:37.061766    4064 out.go:177] * [functional-758100] minikube v1.35.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.5608 Build 19045.5608
	I0317 10:58:37.068432    4064 notify.go:220] Checking for updates...
	I0317 10:58:37.072090    4064 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0317 10:58:37.076440    4064 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 10:58:37.079973    4064 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0317 10:58:37.085611    4064 out.go:177]   - MINIKUBE_LOCATION=20535
	I0317 10:58:37.089842    4064 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 10:58:37.093783    4064 config.go:182] Loaded profile config "functional-758100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0317 10:58:37.095069    4064 driver.go:394] Setting default libvirt URI to qemu:///system
** /stderr **
functional_test.go:1042: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.04s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)